<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sara Miteva</title>
    <description>The latest articles on DEV Community by Sara Miteva (@saramiteva).</description>
    <link>https://dev.to/saramiteva</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F345653%2F1ef16ac4-328e-446d-aa40-62b35acb0a2c.jpg</url>
      <title>DEV Community: Sara Miteva</title>
      <link>https://dev.to/saramiteva</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/saramiteva"/>
    <language>en</language>
    <item>
      <title>The Advent of Monitoring, Day 3: Easy Monitoring for Self-Hosted Projects with Checkly</title>
      <dc:creator>Sara Miteva</dc:creator>
      <pubDate>Thu, 14 Dec 2023 13:42:33 +0000</pubDate>
      <link>https://dev.to/checkly/the-advent-of-monitoring-day-3-easy-monitoring-for-self-hosted-projects-with-checkly-38c9</link>
      <guid>https://dev.to/checkly/the-advent-of-monitoring-day-3-easy-monitoring-for-self-hosted-projects-with-checkly-38c9</guid>
      <description>&lt;p&gt;&lt;em&gt;This is the third part of our 12-day Advent of Monitoring series. In this series, Checkly's engineers will share practical monitoring tips from their own experience.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This article was written by Daniel Paulus, Checkly's Director of Engineering.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;When it comes to running self-hosted services or side projects, monitoring is key. But, who has the time to set up a complex monitoring system?&lt;/p&gt;

&lt;p&gt;We want to deliver cool software and not be busy with configuring Prometheus servers or Grafana Dashboards. That’s where synthetic monitoring fits in perfectly – it's straightforward to set up and reliably keeps an eye on things.&lt;/p&gt;

&lt;h2&gt;Setting Up Multi-Step API Checks&lt;/h2&gt;

&lt;p&gt;I love using Checkly for this because of the great developer experience it gives me. It's particularly handy with its latest feature: multi-step API checks (still in beta). Multi-step checks let you script an arbitrary number of HTTP requests, organized in steps, with custom logic in between, so you can monitor any HTTP-based service. This is ideal for more advanced monitoring needs without getting too complicated.&lt;/p&gt;

&lt;p&gt;The task at hand was to set up basic monitoring for my self-hosted ClickHouse database. For those who want to jump straight in, all the necessary code is available on &lt;a href="https://github.com/danielpaulus/clickhouse-monitoring"&gt;GitHub&lt;/a&gt;. Just clone it, set up the &lt;code&gt;ch_pass&lt;/code&gt; (ClickHouse password), &lt;code&gt;ch_user&lt;/code&gt; (ClickHouse user name), and &lt;code&gt;ch_url&lt;/code&gt; (&lt;code&gt;https://IP:port&lt;/code&gt;) account variables in your Checkly account, and run &lt;code&gt;npx checkly deploy&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If you prefer a step-by-step guide, here’s how I did it:&lt;/p&gt;

&lt;p&gt;We will use ClickHouse’s HTTP interface to run queries that check the database health.&lt;/p&gt;
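&lt;p&gt;&lt;em&gt;To make the HTTP interface concrete, here is a hedged sketch (not part of the repo above): a ClickHouse query is just an HTTP POST with the SQL as the body and credentials in headers. The helper is pure so you can see exactly what goes over the wire; the env variable names match the ones used below.&lt;/em&gt;&lt;/p&gt;

```typescript
// Hedged sketch: build the fetch options for a ClickHouse HTTP query.
export function clickhouseRequest(sql: string, user: string, key: string) {
  return {
    method: "POST",
    headers: { "X-ClickHouse-User": user, "X-ClickHouse-Key": key },
    body: sql,
  };
}

// Usage against a real server (URL, user, and password come from env vars):
// const res = await fetch(process.env.ch_url, clickhouseRequest("SELECT version()", process.env.ch_user, process.env.ch_pass));
// console.log(await res.text());
```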

&lt;p&gt;First, create a Checkly project. It's as easy as running &lt;code&gt;npm create checkly&lt;/code&gt; and following the instructions. Then, you'll need four files in your &lt;strong&gt;checks&lt;/strong&gt; directory:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;clickhouse.check.ts&lt;/code&gt;:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { MultiStepCheck, Frequency } from "checkly/constructs";
import path from "node:path";

//this defines a check. The actual code is in the spec files
new MultiStepCheck("clickhouse-check-1", {
  name: "Clickhouse Version Check",
  runtimeId: "2023.09",
  frequency: Frequency.EVERY_10M,
  locations: ["us-east-1", "eu-west-1"],
  code: {
    entrypoint: path.join(__dirname, "clickhouse.spec.ts"),
  },
});

new MultiStepCheck("clickhouse-check-2", {
  name: "Clickhouse Free Diskspace",
  runtimeId: "2023.09",
  frequency: Frequency.EVERY_12H,
  locations: ["us-east-1", "eu-west-1"],
  code: {
    entrypoint: path.join(__dirname, "clickhouse-disk.spec.ts"),
  },
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;&lt;code&gt;clickhouse.spec.ts&lt;/code&gt; contains a very basic set of checks. First, it pings the ClickHouse health endpoint to check whether the server is running at all. If that succeeds, we run the most basic query, &lt;code&gt;SELECT version()&lt;/code&gt;, and make sure the version string is returned as expected. Now we know that ClickHouse is responsive and ready to go.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { test, expect } from "@playwright/test";
import { baseUrl, queryClickhouse } from "./utils";

const clickHouseVersion = "23.9.1.1854";

async function pingClickhouse(request) {
  const response = await request.get(`${baseUrl}/ping`);
  expect(response.ok()).toBeTruthy();
  const msg = await response.text();
  expect(msg.trim()).toBe("Ok.");
}

test("check clickhouse", async ({ request }) =&amp;gt; {
  await test.step("ping clickhouse", async () =&amp;gt; {
    await pingClickhouse(request);
  });

  await test.step("check clickhouse version", async () =&amp;gt; {
    const response = await queryClickhouse("SELECT version()", request);
    expect(response[0]).toBe(clickHouseVersion);
  });
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;&lt;code&gt;clickhouse-disk.spec.ts&lt;/code&gt; is a very simple check to make sure ClickHouse has at least 5GB of free disk space left. If not, the check will fail and we will get an alert from Checkly. ClickHouse exposes a lot of interesting metrics through its query interface, so you can monitor many other things. Take a look: &lt;a href="https://clickhouse.com/blog/clickhouse-debugging-issues-with-system-tables"&gt;https://clickhouse.com/blog/clickhouse-debugging-issues-with-system-tables&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { test, expect } from "@playwright/test";
import { queryClickhouse } from "./utils";

const diskSpaceQuery = `
SELECT 
    name, 
    free_space / 1024 / 1024 / 1024 AS free_space_gb
FROM system.disks;

`;

test("check clickhouse", async ({ request }) =&amp;gt; {
  await test.step("check clickhouse diskspace", async () =&amp;gt; {
    const response = await queryClickhouse(diskSpaceQuery, request);
    const row = response[0].split("\t");
    console.log(`Disk ${row[0]} has ${row[1]}GB free`);
    //make sure we got more than 5GB left
    expect(parseFloat(row[1])).toBeGreaterThan(5);
  });
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;&lt;code&gt;utils.js&lt;/code&gt; contains a very simple ClickHouse response parser. I know there is a lot of room for improvement here, but you know, I want to build apps, not monitoring ;-) As you can see, you need to set up environment variables for the user name, password, and ClickHouse HTTP URL in your Checkly env variables for this to work.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { expect } from "@playwright/test";
export const baseUrl = process.env.ch_url;

export async function queryClickhouse(query, request) {
  const buff = Buffer.from(query, "utf-8");
  const response = await request.post(`${baseUrl}`, {
    headers: {
      "X-ClickHouse-User": process.env.ch_user,
      "X-ClickHouse-Key": process.env.ch_pass,
    },
    data: buff,
  });

  const buf = await response.body();
  expect(response.ok()).toBeTruthy();
  const res = buf.toString("utf-8");
  const resArray = res.split("\n");
  resArray.pop();
  return resArray;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you have your files ready, run &lt;code&gt;npx checkly login&lt;/code&gt; to log in to your account, then give it a try by running &lt;code&gt;npx checkly test&lt;/code&gt;. If that works and your ClickHouse cluster responds, it is time to unite testing with monitoring and deploy your checks to Checkly for continuous active monitoring. Deploy your checks using &lt;code&gt;npx checkly deploy&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;And we’re done! This setup will continuously monitor your ClickHouse database, ensuring it’s up and running all the time. How cool is it that you can put the monitoring code next to your application code and deploy changes that easily?&lt;/p&gt;

&lt;p&gt;In case your service does encounter an issue, it's important to get notified quickly. Checkly offers various alerting methods, including SMS, phone calls, Slack, and PagerDuty. You can set these up by checking out &lt;a href="https://www.checklyhq.com/docs/alerting-and-retries/alert-channels/"&gt;Checkly's Alert Channels documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;With Checkly, setting up monitoring for your self-hosted projects is a breeze, giving you both peace of mind and more time to focus on other aspects of your projects.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is the third part of our 12-day Advent of Monitoring series. In this series, Checkly's engineers will share practical monitoring tips from their own experience.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This article was written by Daniel Paulus, Checkly's Director of Engineering.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>monitoring</category>
    </item>
    <item>
      <title>The Advent of Monitoring, Day 2: Debugging Dashboard Outages with Checkly's API Checks</title>
      <dc:creator>Sara Miteva</dc:creator>
      <pubDate>Tue, 12 Dec 2023 17:39:38 +0000</pubDate>
      <link>https://dev.to/checkly/the-advent-of-monitoring-day-2-debugging-dashboard-outages-with-checklys-api-checks-21gp</link>
      <guid>https://dev.to/checkly/the-advent-of-monitoring-day-2-debugging-dashboard-outages-with-checklys-api-checks-21gp</guid>
      <description>&lt;p&gt;&lt;em&gt;This is the second part of our 12-day Advent of Monitoring series. In this series, Checkly's engineers will share practical monitoring tips from their own experience.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post was written by Daniel Paulus, Checkly's Director of Engineering.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We encountered a tricky issue with our public dashboards: they were experiencing sporadic outages, happening about once every two days. The infrequency and unpredictability of these outages made them particularly challenging to diagnose.&lt;/p&gt;

&lt;h2&gt;Our Hypothesis&lt;/h2&gt;

&lt;p&gt;Initially, we tried to correlate the outages with our logs, looking at the failure times reported by our browser check failures. Unfortunately, this method didn't yield any useful insights.&lt;/p&gt;

&lt;p&gt;This is when we tried to use high-frequency API checks, running every 10 seconds. Our existing setup involved a browser check that ran hourly, which was sufficient under normal circumstances but not detailed enough to catch these intermittent issues.&lt;/p&gt;

&lt;h3&gt;ChatGPT to the Rescue&lt;/h3&gt;

&lt;p&gt;The issue with our public dashboards has, of course, long since been fixed, so to simulate a similar scenario, I used ChatGPT (GPT-4) to generate a simple Node.js server using the following prompt:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Write a simple nodejs based rest endpoint that always returns a body "OK" and&lt;br&gt;
a 200 status code without using a framework. &lt;br&gt;
The endpoint should return right away. &lt;br&gt;
There should be a loop that runs every 10s and with a 0.1% chance,&lt;br&gt;
will cause the endpoint to return HTTP 500 for a time window of 30-60 seconds.&lt;br&gt;
Afterwards it will go back to returning 200.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now all I had to do was save the resulting code to a file and run it with Node.js. To expose it publicly for testing purposes, a simple&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ngrok http 3000&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;was the perfect solution.&lt;/p&gt;

&lt;h2&gt;Configuring API Checks with Checkly&lt;/h2&gt;

&lt;p&gt;Next, I configured two API checks with Checkly, one running every 5 minutes and one running every 10 seconds.&lt;/p&gt;
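&lt;p&gt;&lt;em&gt;With Checkly's monitoring-as-code approach, a check like this can be declared as a construct. A hedged sketch, not the configuration actually used in this experiment; the check ID, name, and URL are placeholders:&lt;/em&gt;&lt;/p&gt;

```typescript
// Hedged sketch of a high-frequency API check declared as code; the URL is
// a placeholder for the publicly exposed ngrok address.
import { ApiCheck, AssertionBuilder, Frequency } from "checkly/constructs";

new ApiCheck("flaky-endpoint-10s", {
  name: "Flaky endpoint (every 10s)",
  frequency: Frequency.EVERY_10S,
  request: {
    method: "GET",
    url: "https://your-ngrok-subdomain.ngrok.io/",
    assertions: [AssertionBuilder.statusCode().equals(200)],
  },
});
```

&lt;p&gt;&lt;em&gt;This is a config fragment; deploying it with &lt;code&gt;npx checkly deploy&lt;/code&gt; would create the check in your account.&lt;/em&gt;&lt;/p&gt;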

&lt;p&gt;The screenshots really illustrate the benefits of temporarily or permanently using higher-frequency checks. The 5-minute check did not detect all the error windows, and it also does not help with understanding how long the service was down. Things can look much better than they really are; you only realize this once you start probing the API every 10 seconds.&lt;/p&gt;

&lt;p&gt;5 minutes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tTxIAZiv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/147me1klljqmk6cxsawj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tTxIAZiv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/147me1klljqmk6cxsawj.png" alt="monitoring results after 5 minutes" width="800" height="587"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;10 seconds:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PiXJbXmT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5dx68sy6irncar916ygz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PiXJbXmT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5dx68sy6irncar916ygz.png" alt="monitoring results after 10 seconds" width="800" height="575"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The results are quite revealing. The high-frequency API check shows almost exactly when the issue started, when it stopped, and how long it lasted. It identifies all of the random 30-60s error windows. In real-world scenarios, this makes it much easier to find the correct logs and, more importantly, to understand how big the problem actually is.&lt;/p&gt;

&lt;p&gt;By leveraging Checkly's API checks for high-resolution failure timing, we were able to identify and subsequently address a problem that our standard monitoring approach had missed. This experience underscored the importance of synthetic monitoring with higher-frequency checks, especially for issues that occur sporadically. With Checkly's help, we found the root cause of the dashboard's sporadic outages, fixed them, and improved the reliability of our service.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is the second part of our 12-day Advent of Monitoring series. In this series, Checkly's engineers will share practical monitoring tips from their own experience.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post was written by Daniel Paulus, Checkly's Director of Engineering.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>monitoring</category>
      <category>devops</category>
      <category>api</category>
    </item>
    <item>
      <title>The Advent of Monitoring, Day 1: What Are Synthetics and Why They Are Needed</title>
      <dc:creator>Sara Miteva</dc:creator>
      <pubDate>Mon, 11 Dec 2023 15:55:31 +0000</pubDate>
      <link>https://dev.to/checkly/the-advent-of-monitoring-day-1-what-are-synthetics-and-why-they-are-needed-2f4i</link>
      <guid>https://dev.to/checkly/the-advent-of-monitoring-day-1-what-are-synthetics-and-why-they-are-needed-2f4i</guid>
      <description>&lt;p&gt;&lt;em&gt;This is the first part of Checkly's 12-day Advent of Monitoring series. In this series, Checkly's engineers will share practical monitoring tips from their own experience.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post was written by &lt;strong&gt;Daniel Paulus&lt;/strong&gt;, Checkly's Director of Engineering.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Hey there! Here is my take on what &lt;strong&gt;&lt;em&gt;synthetic monitoring&lt;/em&gt;&lt;/strong&gt; means and why it’s awesome!&lt;/p&gt;

&lt;p&gt;I think it’s a very complicated word for a very straightforward concept. In fact, I am convinced that once you've used it, &lt;em&gt;you will never want to live without it&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Let’s start with &lt;strong&gt;e2e testing&lt;/strong&gt;: in essence, you define a series of steps that interact with your web app or REST API to simulate real user behavior.&lt;/p&gt;

&lt;p&gt;For &lt;strong&gt;web apps&lt;/strong&gt;, that means your test clicks buttons and types text while making assertions along the way.&lt;/p&gt;

&lt;p&gt;For &lt;strong&gt;REST APIs&lt;/strong&gt;, you check that the actual HTTP requests are fast enough and contain the expected responses. This way, you make sure &lt;strong&gt;core user flows are working as expected&lt;/strong&gt; when you test your code before deploying it to production.&lt;/p&gt;
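&lt;p&gt;&lt;em&gt;As a hedged, self-contained illustration of the REST API case: an API e2e check boils down to timing a request and asserting on status and body. The toy in-process server, latency budget, and payload below are invented for the example; in reality you would hit your production API.&lt;/em&gt;&lt;/p&gt;

```typescript
// Minimal sketch of an API e2e check. A toy in-process server stands in for
// the real API; the /health path, 2s budget, and payload are made up.
import http from "node:http";

const server = http.createServer((_req, res) => {
  res.writeHead(200, { "content-type": "application/json" });
  res.end(JSON.stringify({ status: "ok" }));
});
await new Promise((resolve) => server.listen(0, () => resolve(null)));
const port = (server.address() as any).port;

// The check itself: is the request fast enough, and is the response expected?
const start = Date.now();
const res = await fetch(`http://127.0.0.1:${port}/health`);
const elapsedMs = Date.now() - start;
const body = await res.json();

let healthy = true;
if (res.status !== 200) healthy = false; // expected status
if (elapsedMs > 2000) healthy = false; // latency budget
if (body.status !== "ok") healthy = false; // expected response

console.log(healthy ? "check passed" : "check failed");
server.close();
```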

&lt;p&gt;Any refactoring that might break a ton of unit tests will not affect e2e tests unless the core user flow is actually broken. So they give you the ultimate signal as to whether your system behaves as it should. If you use a modern e2e testing framework like Playwright, your tests will also be quite fast and far less flaky than with Selenium or other older testing tools.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Mf9MqKOP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tetcazqjn2sk9m48gdew.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Mf9MqKOP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tetcazqjn2sk9m48gdew.png" alt="e2e testing" width="436" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;How does all of this relate to synthetics?&lt;/h2&gt;

&lt;p&gt;Well, the general idea is to &lt;strong&gt;promote those e2e tests you have created, to continuously run against your production services&lt;/strong&gt;! Now you have the best signal there is to understand if users can actually use your service, all the time, in your production environment. You will know when things break before your users inform you. Consider also the following questions you are finally getting answers to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The unit tests all worked, but &lt;strong&gt;does the feature actually work on production&lt;/strong&gt;?&lt;/li&gt;
&lt;li&gt;Is the auth provider having an outage and &lt;strong&gt;people cannot log in&lt;/strong&gt;, although your own software is correct?&lt;/li&gt;
&lt;li&gt;Is any other &lt;strong&gt;third-party dependency&lt;/strong&gt; slow or broken?&lt;/li&gt;
&lt;li&gt;It works in Germany, but does it work in California? Is there some &lt;strong&gt;regional blocking in place&lt;/strong&gt;?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Take a look at the screenshot below, for example. You can immediately spot which part of the system is affected. Compare that to a list of error logs or multiple metrics dashboards and you will understand what I mean when I say synthetics give you a clear signal compared to all the noise.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mjQg_9W9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hl5hl51g81spau7ldbs9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mjQg_9W9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hl5hl51g81spau7ldbs9.png" alt="spotting a monitoring alert" width="800" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ultimately, synthetic monitoring means monitoring through simulated users, or testing in production, and is like your app's guardian angel. It's always there, quietly keeping watch, ensuring everything runs smoothly for your users.&lt;/p&gt;

&lt;p&gt;For those in the software development scene, it's an invaluable ally, keeping your app top-notch and your users happy. With a modern tool like Checkly, leveraging &lt;strong&gt;Monitoring as Code&lt;/strong&gt;, you can repurpose your existing e2e tests to run against production without writing them twice.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is the first part of Checkly's 12-day Advent of Monitoring series. In this series, Checkly's engineers will share practical monitoring tips from their own experience.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post was written by &lt;strong&gt;Daniel Paulus&lt;/strong&gt;, Checkly's Director of Engineering.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>monitoring</category>
      <category>devops</category>
    </item>
    <item>
      <title>Common DevSecOps Challenges and How to Overcome Them</title>
      <dc:creator>Sara Miteva</dc:creator>
      <pubDate>Tue, 18 May 2021 09:37:01 +0000</pubDate>
      <link>https://dev.to/microtica/common-devsecops-challenges-and-how-to-overcome-them-2nko</link>
      <guid>https://dev.to/microtica/common-devsecops-challenges-and-how-to-overcome-them-2nko</guid>
      <description>&lt;p&gt;DevOps has changed the way developers and operating engineers think. The DevOps paradigm has transformed the software and technology development process. As a result, improving performance and delivering faster outcomes have become the standard of meeting the market's demands.&lt;/p&gt;

&lt;p&gt;However, as the infrastructure evolves, security has become a new concern, one that developers now work to address on a regular basis. Security professionals have had to look at options that can implement security mechanisms throughout the DevOps process, covering the entire implementation cycle. The aim is to prevent and mitigate security threats as they emerge across the software development process.&lt;/p&gt;

&lt;p&gt;DevOps implementation, if performed correctly, can yield significant benefits for any company, including improved team coordination, quicker time to market, increased overall efficiency, and increased customer loyalty. But without security in mind, you can lose every one of them in the blink of an eye. &lt;/p&gt;

&lt;p&gt;That’s why, to be safer, we’ll add a “Sec” to DevOps. This article will focus on the DevSecOps methodology and the DevSecOps challenges you might encounter when implementing it in your processes. &lt;/p&gt;

&lt;h2&gt;What is DevSecOps? &lt;/h2&gt;

&lt;p&gt;DevSecOps stands for &lt;em&gt;development, security, &lt;/em&gt;and &lt;em&gt;operations. &lt;/em&gt;It automates security deployment during the product development lifecycle, from original design to configuration, testing, implementation, and software delivery.&lt;/p&gt;

&lt;p&gt;Every phase of the DevOps process should include security: plan, build, test, release, maintain, and beyond. DevSecOps refers to the security you build into the DevOps process. This concept improves stability by enhancing coordination and mutual responsibility across the entire DevOps workflow.&lt;/p&gt;

&lt;p&gt;DevSecOps is a gradual and inevitable evolution in the way development teams think about security. Previously, a dedicated security team handled application security at the end of the implementation stage, and a separate quality assurance (QA) team then reviewed it.&lt;/p&gt;

&lt;p&gt;With DevSecOps, you integrate security into the agile and DevOps processes. Problems are addressed as they arise, when they are simpler, quicker, and less costly to resolve. Furthermore, rather than being the exclusive concern of a security team, DevSecOps makes application and infrastructure security a joint responsibility of the development, security, and operations teams.&lt;/p&gt;

&lt;h2&gt;DevSecOps Challenges&lt;/h2&gt;

&lt;p&gt;Implementing DevSecOps comes with a number of challenges. Here are some of them: &lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;The cultural shift&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;The most significant roadblock most organizations face in adopting a DevSecOps strategy is resistance. Many people are reluctant to make significant adjustments to what they've been doing for years. And the perception that security was an optional extra in previous app development methods doesn't make things easier. &lt;/p&gt;

&lt;p&gt;Another common stumbling block is the perception that improved security slows things down and prevents creativity. Developers want to produce code quickly in order to satisfy the needs of modern companies. Security departments, on the other hand, are mostly concerned with ensuring that the code is safe. Because their goals are so dissimilar, it's difficult for these two teams to work together.&lt;/p&gt;

&lt;p&gt;That’s why thorough preparation of both development and security professionals will remove some of the cultural challenges and get the teams on board with the new processes. Getting everyone on board and developing new practices that work for all team members are two crucial things to do before making the shift. &lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;Lack of knowledge&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;In addition to cultural preparations, professional development and education are also important. Research by &lt;a href="https://resources.securitycompass.com/blog/devsecops-challenges-and-drivers"&gt;Security Compass&lt;/a&gt; shows that the lack of education/awareness about security and compliance is one of the most common DevSecOps challenges when it comes to implementation, with 38% of the respondents highlighting it. &lt;/p&gt;

&lt;p&gt;Start with formal in-house training that raises awareness about security within your team. The most experienced security professionals should mentor other team members and help them level up their security game. Finally, provide your developers with online courses they can watch whenever it suits them, with the goal of learning how to address particular security issues. &lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;Complex tool integrations&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;The majority of DevOps toolchains come from various vendors. Source code management, CI/CD, build tools, binary libraries, code review, and problem monitoring tools are chosen by teams based on their specific requirements. &lt;/p&gt;

&lt;p&gt;Adding security tools makes things even more complex. Security analysis typically involves static application security testing (SAST), software composition analysis (SCA), and some form of dynamic testing. Developers need a complete picture of the problems, but it can be difficult to combine and reconcile results from different vendors' tools.&lt;/p&gt;

&lt;p&gt;Finding one tool that can address your security concerns is probably the best option. It will make things easier for developers, on an individual level, and for the entire organization. &lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;Traditional security tools vs. agile DevOps&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;Many software security tools were built with the idea that an employee of the security team would run the checks, analyze the sometimes lengthy list of results, and then send the list back to the development team for improvements. &lt;/p&gt;

&lt;p&gt;This time-consuming, labor-intensive approach is incompatible with DevOps' high-speed, integrated, and automatic model. It also shows that incorporating protection into DevOps isn't enough. You need to use solutions that were built upon DevOps practices. These solutions are usually flexible and can easily be integrated into any existing agile process. &lt;/p&gt;

&lt;p&gt;That means that, to be fully compatible with DevOps, tests should run in the background without human intervention. Moreover, such tools can enforce security policies automatically so that developers can concentrate on the most critical issues.&lt;/p&gt;

&lt;p&gt;To enhance the security efforts of development teams, we’ve recently announced a new feature - the integrated &lt;a href="https://microtica.com/blog/common-devsecops-challenges-and-how-to-overcome-them/?utm_source=devto&amp;amp;utm_medium=referral_link&amp;amp;utm_campaign=devsecops_challenges"&gt;Container Security Scan Reports in Microtica Pipelines&lt;/a&gt;. The system will perform an automated security scan on the container images and deliver the findings directly to the Portal’s UI. This feature is just the beginning of a series of security enhancements that are coming in Microtica. &lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;DevSecOps is a methodology for approaching IT security with the mentality that security is everyone's concern. It entails incorporating security practices into a company's DevOps system, with the aim of integrating protection into the software development process at every level. DevSecOps means you're not leaving security until the end of the development cycle, in contrast to previous development models.&lt;/p&gt;

&lt;p&gt;If your company already uses DevOps, you should think about switching to DevSecOps. DevSecOps is based on the DevOps philosophy at its heart, which will help you shift easily. And by doing so, you'll be able to pull together skilled individuals from various strategic backgrounds to improve the current security processes.&lt;/p&gt;

&lt;p&gt;There's no denying that DevSecOps is changing the way businesses approach security. Many companies, though, are also wary about moving to DevSecOps for a number of factors, including a lack of knowledge of what DevSecOps is, an unwelcome cultural change for staff, budget restrictions, and often just the uncertainty of the concept.&lt;/p&gt;

&lt;p&gt;The technological and business advantages that companies will gain from adopting DevSecOps are extremely promising. While there will undoubtedly be some setbacks when you first begin, DevSecOps can be extremely beneficial to your company in the long run. Partnering up with a company that’s already skilled in DevSecOps can help you make the most out of it. &lt;/p&gt;

</description>
      <category>devops</category>
      <category>security</category>
      <category>cloudnative</category>
      <category>infrastructure</category>
    </item>
    <item>
      <title>A Step-by-Step Guide to AWS Instance Scheduler</title>
      <dc:creator>Sara Miteva</dc:creator>
      <pubDate>Tue, 27 Apr 2021 19:40:51 +0000</pubDate>
      <link>https://dev.to/microtica/a-step-by-step-guide-to-aws-instance-scheduler-14lj</link>
      <guid>https://dev.to/microtica/a-step-by-step-guide-to-aws-instance-scheduler-14lj</guid>
      <description>&lt;p&gt;AWS Instance Scheduler is a popular option for saving up a large portion of the cost of computing services in situations where there are &lt;strong&gt;predictable planned times for operating compute services&lt;/strong&gt;. In other words, since no clients are accessing particular environments during the period, it's normal for development environments or workloads to be shut down during non-working times. &lt;/p&gt;

&lt;p&gt;By evaluating when the instances are more widely used, you can implement more complex schedules, or even apply an always-stopped schedule and then start up the instances when you need them. &lt;/p&gt;

&lt;p&gt;In this article, we will cover a step-by-step guide to creating an AWS schedule and applying it to several instances. At the very end, there is a section on how to do this in less than 5 minutes, without having to set up and manage the infrastructure for the scheduler. &lt;/p&gt;

&lt;h2&gt;Solution Overview&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh4.googleusercontent.com%2FZ0mcI0cjac153FZKafomCfrfU2hLACcZQ7TJU7_fjh83aZemqYckVYv2kRmZj_kC9HWzF1k0h3-eoXQta4jAw-zVzYZd-rPVB2AfWycMivsgKb3LPawsKJyYROEtM0qBYLwXaAU8" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh4.googleusercontent.com%2FZ0mcI0cjac153FZKafomCfrfU2hLACcZQ7TJU7_fjh83aZemqYckVYv2kRmZj_kC9HWzF1k0h3-eoXQta4jAw-zVzYZd-rPVB2AfWycMivsgKb3LPawsKJyYROEtM0qBYLwXaAU8" alt="AWS-Instance-Scheduler-Architecture"&gt;&lt;/a&gt;AWS Instance Scheduler Architecture&lt;/p&gt;

&lt;p&gt;This &lt;a href="https://aws.amazon.com/solutions/implementations/instance-scheduler/" rel="noreferrer noopener"&gt;CloudFormation template&lt;/a&gt; creates an environment for the AWS Instance Scheduler. The solution uses the following AWS services:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/cloudwatch/" rel="noreferrer noopener"&gt;Amazon CloudWatch&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/lambda/" rel="noreferrer noopener"&gt;AWS Lambda&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/dynamodb/" rel="noreferrer noopener"&gt;Amazon DynamoDB&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/rds/" rel="noreferrer noopener"&gt;Amazon RDS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html" rel="noreferrer noopener"&gt;Amazon EC2&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A CloudWatch event triggers the Lambda function, whose job is to check the EC2 and RDS instances, find the appropriate tag for a schedule, and match it against the definition of that schedule stored in DynamoDB. Then, depending on the target state, the Lambda function decides to turn the EC2 or RDS instance on or off.&lt;/p&gt;
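&lt;p&gt;The decision the Lambda function makes can be sketched in a few lines of Python. The schedule table, tag names, and example values below are illustrative, in-memory stand-ins for DynamoDB and the EC2/RDS APIs, not the solution's actual code:&lt;/p&gt;

```python
# Illustrative stand-in for the scheduler's decision logic. SCHEDULES
# plays the role of the DynamoDB table; real instances would be read
# through the EC2/RDS APIs.
SCHEDULES = {
    # schedule name mapped to the hours (0-23) during which instances run
    "work-days": set(range(9, 17)),
}

def desired_state(instance_tags, hour, tag_key="Schedule"):
    """Return 'running', 'stopped', or None when no schedule applies."""
    name = instance_tags.get(tag_key)
    if name is None or name not in SCHEDULES:
        return None  # untagged instances are left alone
    return "running" if hour in SCHEDULES[name] else "stopped"

print(desired_state({"Schedule": "work-days"}, hour=10))  # running
print(desired_state({"Schedule": "work-days"}, hour=22))  # stopped
print(desired_state({"Name": "web-server"}, hour=10))     # None
```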

&lt;p&gt;The default behavior of this solution is to stop EC2 and RDS instances, not terminate or delete them, as terminated instances cannot be restarted. It is also a good idea to create an RDS instance snapshot before stopping the instances; each new snapshot overwrites the last one. This is part of setting up the stack and is explained in the next section. &lt;/p&gt;

&lt;h2&gt;How it works&lt;/h2&gt;

&lt;p&gt;Schedules define when the EC2 and RDS instances should be running. Each schedule has a unique name, and its configuration is stored in DynamoDB. &lt;/p&gt;

&lt;p&gt;The entire setup revolves around tagging. A tag is a label you attach to an instance so you can categorize it and quickly identify it later; each tag is a &lt;em&gt;key-value&lt;/em&gt; pair that you define. &lt;/p&gt;

&lt;p&gt;The AWS Instance Scheduler solution has a default &lt;em&gt;tag key&lt;/em&gt; called &lt;strong&gt;&lt;em&gt;Schedule&lt;/em&gt;&lt;/strong&gt;. You can, of course, change it if needed. The &lt;em&gt;tag value&lt;/em&gt; should be the &lt;strong&gt;&lt;em&gt;unique name&lt;/em&gt;&lt;/strong&gt; of the schedule you want to apply to the instance. Each time the Lambda function runs, it retrieves the configuration for the schedule from DynamoDB and applies it to the instance. &lt;/p&gt;

&lt;p&gt;For example, you could define a tag with &lt;em&gt;key: value&lt;/em&gt; = &lt;strong&gt;&lt;em&gt;Schedule: work-days&lt;/em&gt;&lt;/strong&gt;. The name work-days specifies a schedule where the instances shut down on Friday evening and start up on Monday morning. The instances that should be covered by this schedule must therefore be tagged with that &lt;em&gt;key: value&lt;/em&gt; pair, in this example &lt;strong&gt;&lt;em&gt;Schedule: work-days&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;Create an Instance Scheduler in your AWS Account&lt;/h2&gt;

&lt;h3&gt;Launch the stack&lt;/h3&gt;

&lt;p&gt;First, make sure you're signed in to your AWS account. The initial step is to launch the Instance Scheduler stack in your AWS account. You can launch the CloudFormation template by clicking &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/new?templateURL=https:%2F%2Fs3.amazonaws.com%2Fsolutions-reference%2Faws-instance-scheduler%2Flatest%2Finstance-scheduler.template" rel="noreferrer noopener"&gt;on this link&lt;/a&gt;. The template launches in the US East (N. Virginia) region by default, so adjust the region according to your needs. Likewise, verify that the correct template is being used, and move to the next step. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2FEJPNbaLoTbIhptLOJXOn7x46iLvW1M2wCXWW9cSNEzpOiob0gUjqZO9KE7ZiBifCvD3DDXbwipi42BkIVXRxyEZzuqZvFcHNlStSLG-q-qtY7oIMEfnXxeYqHIeWM9SKlGgMFMg8" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2FEJPNbaLoTbIhptLOJXOn7x46iLvW1M2wCXWW9cSNEzpOiob0gUjqZO9KE7ZiBifCvD3DDXbwipi42BkIVXRxyEZzuqZvFcHNlStSLG-q-qtY7oIMEfnXxeYqHIeWM9SKlGgMFMg8" alt="Instance-Scheduler-stack-Step1"&gt;&lt;/a&gt;Creating the Instance Scheduler stack - Step 1&lt;/p&gt;

&lt;p&gt;Second, you need to define the &lt;strong&gt;&lt;em&gt;Stack name&lt;/em&gt;&lt;/strong&gt; and some specific parameters: the tag name (if you’d like to change the default tag name &lt;strong&gt;&lt;em&gt;Schedule&lt;/em&gt;&lt;/strong&gt;), which service you want to schedule (EC2, RDS, or both), whether you want an RDS instance snapshot before shutting down the resource, how frequently the AWS Lambda function runs, whether you want CloudWatch metrics and logs, and a couple more parameters. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmk0microtica2di3k2co.kinstacdn.com%2Fwp-content%2Fuploads%2F2021%2F04%2Fscreencapture-console-aws-amazon-cloudformation-home-2021-04-22-22_41_46-759x1024.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmk0microtica2di3k2co.kinstacdn.com%2Fwp-content%2Fuploads%2F2021%2F04%2Fscreencapture-console-aws-amazon-cloudformation-home-2021-04-22-22_41_46-759x1024.png" alt="Instance-Scheduler-stack-Step2"&gt;&lt;/a&gt;Creating the Instance Scheduler stack - Step 2&lt;/p&gt;

&lt;p&gt;Third, leave the stack options as they are - empty. Continue to the last step, review the changes, and deploy the stack by clicking Create. In the AWS Console you can follow the status of the stack deployment; &lt;strong&gt;CREATE_COMPLETE&lt;/strong&gt; means that the solution is deployed. &lt;/p&gt;

&lt;h3&gt;Define the Schedule&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh4.googleusercontent.com%2FZbsRnC0YoLPMfwq9tG9pT6Ld1XyHjf9jzdOW6ioU_mNySM2MWOyBr2lTYCThkNloR8IZY8B9fuLhhVqM0llhyKe6w_txWyEA6IbFYI30GLnUW_EWWBgQpx-pss7wtMgJDHWO6NiS" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh4.googleusercontent.com%2FZbsRnC0YoLPMfwq9tG9pT6Ld1XyHjf9jzdOW6ioU_mNySM2MWOyBr2lTYCThkNloR8IZY8B9fuLhhVqM0llhyKe6w_txWyEA6IbFYI30GLnUW_EWWBgQpx-pss7wtMgJDHWO6NiS" alt="Deployed-Instance-Scheduler-stack"&gt;&lt;/a&gt;Deployed Instance Scheduler stack&lt;/p&gt;

&lt;p&gt;In your AWS console you will find an Amazon DynamoDB table (ConfigTable) where the configurations for periods and schedules are stored. A schedule is defined by several parameters, of which the most important is the &lt;strong&gt;period&lt;/strong&gt;: the time in which the instances should be active. It can be as specific as hours, days, and months. For the schedule to work, you must specify at least one of the period’s fields: &lt;em&gt;begintime&lt;/em&gt;, &lt;em&gt;endtime&lt;/em&gt;, &lt;em&gt;weekdays&lt;/em&gt;, &lt;em&gt;months&lt;/em&gt;, or &lt;em&gt;monthdays&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;begintime&lt;/em&gt; field defines when the instance schedule should start the instances, and accordingly, &lt;em&gt;endtime&lt;/em&gt; is when the instances should be stopped. If you define only a starting time, the instances will have to be stopped manually. &lt;/p&gt;

&lt;p&gt;The ConfigTable already contains some predefined samples that you can use, or you can create your own. A sample period looks like this: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh6.googleusercontent.com%2FiFacWOwVkJ_M7ULdP1p7_6e7qXs6C6lEeV-ZFL9vfkK74r0OpX8qQlRc1pdwRKH4SaIYxfFDRId6kiSkf_yFzNFrT-kNWovbDnpVHKN1bXl7ni6JCByPA6FvlXt8qFhLqgrSb2QY" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh6.googleusercontent.com%2FiFacWOwVkJ_M7ULdP1p7_6e7qXs6C6lEeV-ZFL9vfkK74r0OpX8qQlRc1pdwRKH4SaIYxfFDRId6kiSkf_yFzNFrT-kNWovbDnpVHKN1bXl7ni6JCByPA6FvlXt8qFhLqgrSb2QY" alt="Office-hours-sample-period"&gt;&lt;/a&gt;Office hours sample period&lt;/p&gt;

&lt;p&gt;This is a &lt;strong&gt;period&lt;/strong&gt; set to define office hours with a &lt;em&gt;begintime&lt;/em&gt; of 9 am and an &lt;em&gt;endtime&lt;/em&gt; of 5 pm, applied only Monday to Friday. This period is used in a schedule to define UK office hours. You can change the periods according to your needs directly from the AWS console. &lt;/p&gt;
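&lt;p&gt;To make the period semantics concrete, here is a small Python sketch that checks whether a given moment falls inside such an office-hours period. The field names mirror the ConfigTable fields described above, but the evaluation logic is a simplified illustration, not the solution's actual implementation:&lt;/p&gt;

```python
from datetime import datetime, time

# Simplified evaluation of a ConfigTable-style period; the real solution
# also supports months, monthdays, and timezones.
office_hours = {
    "begintime": time(9, 0),
    "endtime": time(17, 0),
    "weekdays": {0, 1, 2, 3, 4},  # Monday=0 .. Friday=4
}

def in_period(period, moment):
    """True when `moment` falls inside the period, i.e. the instance runs."""
    if moment.weekday() not in period["weekdays"]:
        return False
    t = moment.time()
    return t >= period["begintime"] and period["endtime"] > t

print(in_period(office_hours, datetime(2021, 4, 26, 10, 30)))  # Monday morning: True
print(in_period(office_hours, datetime(2021, 4, 24, 10, 30)))  # Saturday: False
```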

&lt;p&gt;Schedules don’t have to be used only for shutting down resources. You can also create more complex schedules; for example, look at the &lt;em&gt;Vertical scaling on weekdays&lt;/em&gt; schedule. Its purpose is to run smaller EC2 instance types on weekends and scale up to larger instances on working days. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh4.googleusercontent.com%2F-ZmKJuLT7PKlvtEC4Cv9E5Y6rXmdbdg2k7y1tE_Qz7ieEAzbgx7p1jPkykTZMKqrykVp5RGmvCrXiWYP_iht6KPMrYXjcYWz5TMVUBNehRmgt-fHzHcn9BGVIxgqQ4Vla8rMXIjY" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh4.googleusercontent.com%2F-ZmKJuLT7PKlvtEC4Cv9E5Y6rXmdbdg2k7y1tE_Qz7ieEAzbgx7p1jPkykTZMKqrykVp5RGmvCrXiWYP_iht6KPMrYXjcYWz5TMVUBNehRmgt-fHzHcn9BGVIxgqQ4Vla8rMXIjY" alt="Vertical-scaling-on-weekdays-schedule"&gt;&lt;/a&gt;Vertical scaling on weekdays&lt;em&gt; &lt;/em&gt;schedule&lt;/p&gt;

&lt;h3&gt;Tag the appropriate instances&lt;/h3&gt;

&lt;p&gt;In order for a schedule to work, you need to tag the EC2/RDS instances you want the schedule applied to. Use the &lt;strong&gt;&lt;em&gt;tag key&lt;/em&gt;&lt;/strong&gt; (the tag name you defined when creating the CFN stack; the default value is Schedule) and the &lt;strong&gt;&lt;em&gt;tag value&lt;/em&gt;&lt;/strong&gt;, which is the name of the schedule stored in the ConfigTable. When you find your resource, go to &lt;em&gt;Manage Tags&lt;/em&gt; and enter the key-value pair defining the schedule. You can also use the &lt;a href="https://docs.aws.amazon.com/ARG/latest/userguide/tag-editor.html" rel="noreferrer noopener"&gt;Tag Editor&lt;/a&gt; to tag multiple instances at once.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2FZz2mdPoX_00C7EL7Y74aZaH49lnz-Shy-nARWQpq-okyDI9owRpMGU6FoSQu0OXp-9_K08HWi6VdOptUNIdFYeeIbSE3O7ZSnHhZ36c07Kkn1roJ5OCTjid-pDUkG7-vNqfHYBYJ" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2FZz2mdPoX_00C7EL7Y74aZaH49lnz-Shy-nARWQpq-okyDI9owRpMGU6FoSQu0OXp-9_K08HWi6VdOptUNIdFYeeIbSE3O7ZSnHhZ36c07Kkn1roJ5OCTjid-pDUkG7-vNqfHYBYJ" alt="tags-EC2-instance"&gt;&lt;/a&gt;Managing tags for an EC2 instance &lt;/p&gt;

&lt;h3&gt;CloudWatch Metrics&lt;/h3&gt;

&lt;p&gt;Once the tag is applied, the instance becomes part of the resources managed by the schedule you defined for it. You can check the CloudWatch metrics to make sure the solution runs as designed. In the CloudWatch console there will be a namespace &lt;strong&gt;&lt;em&gt;&amp;lt;stackname&amp;gt;:InstanceScheduler&lt;/em&gt;&lt;/strong&gt; (&lt;em&gt;DevInstanceScheduler:InstanceScheduler&lt;/em&gt; in our case). The Lambda function updates the metrics each time it runs, for each instance it is supposed to apply a schedule to. Here you also get a sense of how many instances are running and how many are stopped. &lt;/p&gt;

&lt;h2 id="saving-schedule-microtica"&gt;Create a Saving Schedule in Microtica&lt;/h2&gt;

&lt;p&gt;If you don’t feel like going through the hassle of launching and deploying a solution in your AWS console, you can use Microtica’s Cloud Waste Manager feature. All you have to do is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://portal.microtica.com/register?utm_source=devto&amp;amp;utm_medium=referral_link&amp;amp;utm_campaign=aws_instance_scheduler&amp;amp;utm_content=instance-schedule" rel="noreferrer noopener"&gt;Sign up for free&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microtica.com/connect-an-aws-account?utm_source=devto&amp;amp;utm_medium=referral_link&amp;amp;utm_campaign=aws_instance_scheduler" rel="noopener noreferrer"&gt;Connect your AWS Account&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Set permissions only for turning instances on and off&lt;/li&gt;
&lt;li&gt;Go to &lt;a href="https://portal.microtica.com/tools/schedules?utm_source=devto&amp;amp;utm_medium=referral_link&amp;amp;utm_campaign=aws_instance_scheduler" rel="noreferrer noopener"&gt;Saving Schedules&lt;/a&gt; → Select an active time period for your instances&lt;/li&gt;
&lt;li&gt;Select the instances you want to apply the schedule to&lt;/li&gt;
&lt;li&gt;Activate&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2FL6y0YP4Ndi8hhiynDpS60udiWS22L5tNVsiuV-8Yuhz9p43s_FN_odoozGDUjdMNpEov3fBPyV4eM5iTn8tsduoHxEpXfoP80EaRnOLiRwNQGnOf-mziqihyWpjCa-vcMyXTUOdB" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2FL6y0YP4Ndi8hhiynDpS60udiWS22L5tNVsiuV-8Yuhz9p43s_FN_odoozGDUjdMNpEov3fBPyV4eM5iTn8tsduoHxEpXfoP80EaRnOLiRwNQGnOf-mziqihyWpjCa-vcMyXTUOdB" alt="saving-schedule-microtica"&gt;&lt;/a&gt;Create a saving schedule in Microtica&lt;/p&gt;



&lt;blockquote&gt;&lt;p&gt;&lt;em&gt;Check out our comprehensive guide on creating an &lt;/em&gt;&lt;a href="https://microtica.com/aws-cost-optimization?utm_source=devto&amp;amp;utm_medium=referral_link&amp;amp;utm_campaign=cost_optimization_pillar" rel="noreferrer noopener"&gt;AWS cost optimization strategy&lt;/a&gt;&lt;em&gt;.&lt;/em&gt;&lt;/p&gt;&lt;/blockquote&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>tutorial</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Recommendations to improve your AWS cost optimization strategy</title>
      <dc:creator>Sara Miteva</dc:creator>
      <pubDate>Thu, 15 Apr 2021 10:55:06 +0000</pubDate>
      <link>https://dev.to/microtica/recommendations-to-improve-your-aws-cost-optimization-strategy-4dn3</link>
      <guid>https://dev.to/microtica/recommendations-to-improve-your-aws-cost-optimization-strategy-4dn3</guid>
      <description>&lt;p&gt;You need to take care of the economic model of the architecture while designing applications and workloads on AWS. Compared to on-premises data centers, it is necessary to look beyond the fundamental pricing benefits and explore ways to leverage the infrastructure successfully to lower your AWS charge.&lt;/p&gt;

&lt;p&gt;Regardless of whether you’re going to hire a FinOps professional or handle the process within the existing team, here are the best practices for AWS cost optimization. &lt;/p&gt;

&lt;h1&gt;Apply for AWS credits&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmk0microtica2di3k2co.kinstacdn.com%2Fwp-content%2Fuploads%2F2021%2F03%2Faws-credits-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmk0microtica2di3k2co.kinstacdn.com%2Fwp-content%2Fuploads%2F2021%2F03%2Faws-credits-2.png" alt="aws credits"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS credits are one of the most common ways to save on your AWS bill. They work like a coupon code that helps you cover the cost of AWS services. You can use them until you spend them all or until they expire. There are various ways to get AWS credits, and here are some of them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/activate/" rel="noopener noreferrer"&gt;AWS Activate&lt;/a&gt; – for startups to set up infrastructure as quickly as possible&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/activate/founders/" rel="noopener noreferrer"&gt;AWS Activate Founders&lt;/a&gt; – for startups that haven’t raised any venture capital, seed, or angel funding&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://developer.amazon.com/en-US/alexa/alexa-skills-kit/new/aws-promotional-credits" rel="noopener noreferrer"&gt;Publish Alexa skills&lt;/a&gt; – for each Alexa skill you publish, you can apply for $100 AWS promotional credits&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/events/" rel="noopener noreferrer"&gt;Attend AWS events and webinars&lt;/a&gt; – here you can find many opportunities for AWS credits&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/education/awseducate/" rel="noopener noreferrer"&gt;AWS Educate&lt;/a&gt; – educators earn $200 in AWS credits, while students can create a starter account with up to $100 in credits at a member institution &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/government-education/nonprofits/nonprofit-credit-program/" rel="noopener noreferrer"&gt;AWS for Nonprofits Credit Program&lt;/a&gt; – this program provides access to $2,000 for nonprofit organizations&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/education/edstart/faq/" rel="noopener noreferrer"&gt;AWS EdStart&lt;/a&gt; – for education technology startups&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/free/?all-free-tier.sort-by=item.additionalFields.SortRank&amp;amp;all-free-tier.sort-order=asc" rel="noopener noreferrer"&gt;AWS Free Tier&lt;/a&gt; – a program that includes 85 products for businesses to start building on AWS, explained in detail in the next section&lt;/li&gt;
&lt;li&gt;Product Hunt – Product Hunt’s &lt;a href="https://www.producthunt.com/ship/aws" rel="noopener noreferrer"&gt;Ship platform&lt;/a&gt; allows startups to claim up to $7,500 in AWS credits. &lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.joinsecret.com/offers/aws-activate-coupon-900" rel="noopener noreferrer"&gt;Use Secret deals&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.f6s.com/deals/hosting/7962/up-to-25k-in-aws-web-hosting" rel="noopener noreferrer"&gt;F6S deals&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.brex.com/rewards/" rel="noopener noreferrer"&gt;Brex&lt;/a&gt; – if you use Brex cards, there are many benefits, including up to $5,000 in AWS credits&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.startupschool.org/" rel="noopener noreferrer"&gt;Startup School&lt;/a&gt; – if you’re a startup, Startup School can bring you a deal with free AWS credits&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;&lt;p&gt;&lt;em&gt;Check out our comprehensive guide on creating an &lt;/em&gt;&lt;a href="https://microtica.com/aws-cost-optimization/?utm_source=devto&amp;amp;utm_medium=referral_link&amp;amp;utm_campaign=cost_optimization_pillar" rel="noopener noreferrer"&gt;AWS cost optimization strategy&lt;/a&gt;&lt;em&gt;.&lt;/em&gt;&lt;/p&gt;&lt;/blockquote&gt;

&lt;h1&gt;Utilize AWS Free Tier&lt;/h1&gt;

&lt;p&gt;Cloud platforms provide a number of services for free in the beginning, but even free services have an upper limit, and they become billable as soon as you hit the cap. People new to cloud computing use these services daily before they fully transition to the cloud environment. Many free cloud plans also come with an expiry date, and the payment period begins as soon as the plan ends.&lt;/p&gt;

&lt;p&gt;For example, Azure offers a free tier plan for a month, allowing you to run two small virtual machines with a storage capacity of 800GB. Google Cloud, on the other hand, offers $300 in credit over a period of 12 months, during which users can use services like Google App Engine or Google Compute Engine. &lt;a href="https://aws.amazon.com/free/?all-free-tier.sort-by=item.additionalFields.SortRank&amp;amp;all-free-tier.sort-order=asc" rel="noopener noreferrer"&gt;AWS’s free tier plan&lt;/a&gt; lasts for 12 months and covers services like EC2, S3, and Amazon RDS.&lt;/p&gt;

&lt;p&gt;The Free Tier extends to a limited number of AWS offerings and is subject to a monthly consumption cap. The AWS Free Usage Tier is divided into three pricing models: a 12-month Free Tier, an Always Free offer, and short-term trials.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmk0microtica2di3k2co.kinstacdn.com%2Fwp-content%2Fuploads%2F2021%2F03%2Faws-free-tier-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmk0microtica2di3k2co.kinstacdn.com%2Fwp-content%2Fuploads%2F2021%2F03%2Faws-free-tier-2.png" alt="AWS free tier plans"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Some of the services available include 750 hours of &lt;a href="https://aws.amazon.com/ec2/?ec2-whats-new.sort-by=item.additionalFields.postDateTime&amp;amp;ec2-whats-new.sort-order=desc" rel="noopener noreferrer"&gt;Amazon EC2&lt;/a&gt; Linux, 750 hours of an &lt;a href="https://aws.amazon.com/elasticloadbalancing/?whats-new-cards-elb.sort-by=item.additionalFields.postDateTime&amp;amp;whats-new-cards-elb.sort-order=desc" rel="noopener noreferrer"&gt;Elastic Load Balancer&lt;/a&gt;, 750 hours of &lt;a href="https://aws.amazon.com/rds/" rel="noopener noreferrer"&gt;Amazon RDS&lt;/a&gt; Single-AZ Micro DB Instances, 5 GB of &lt;a href="https://aws.amazon.com/s3/" rel="noopener noreferrer"&gt;Amazon S3&lt;/a&gt; standard storage, 10 &lt;a href="https://aws.amazon.com/cloudwatch/" rel="noopener noreferrer"&gt;Amazon Cloudwatch&lt;/a&gt; metrics with 1,000,000 API requests, and others. You can see all the services and limitations &lt;a href="https://aws.amazon.com/free/?all-free-tier.sort-by=item.additionalFields.SortRank&amp;amp;all-free-tier.sort-order=asc" rel="noopener noreferrer"&gt;here&lt;/a&gt;. &lt;/p&gt;
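&lt;p&gt;A quick way to reason about these caps is to compare your expected monthly usage against them. The sketch below uses the limits mentioned above; it is a back-of-the-envelope illustration, not an official AWS billing calculation:&lt;/p&gt;

```python
# Back-of-the-envelope check against the Free Tier caps mentioned above;
# not an official AWS billing calculation.
FREE_TIER_CAPS = {"ec2_hours": 750, "elb_hours": 750, "rds_hours": 750, "s3_gb": 5}

def billable_overage(usage):
    """Return, per metric, the portion of usage that exceeds its free cap."""
    return {k: max(0, usage.get(k, 0) - cap) for k, cap in FREE_TIER_CAPS.items()}

# One t2.micro running 24/7 through a 31-day month uses 744 hours,
# which stays under the 750-hour cap; 12 GB of S3 storage does not.
print(billable_overage({"ec2_hours": 744, "s3_gb": 12}))
# {'ec2_hours': 0, 'elb_hours': 0, 'rds_hours': 0, 's3_gb': 7}
```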

&lt;p&gt;For example, the AWS Free Tier model &lt;a href="https://www.infoworld.com/article/3585757/how-to-make-the-most-of-the-aws-free-tier.html" rel="noopener noreferrer"&gt;was used at a college&lt;/a&gt; to teach students about web frameworks. However, you can use it for much more. This model can allow you to build and maintain a basic web application. &lt;a href="https://aws.amazon.com/getting-started/hands-on/build-web-app-s3-lambda-api-gateway-dynamodb/" rel="noopener noreferrer"&gt;This example by AWS&lt;/a&gt; can guide you through making an app with AWS Amplify, Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. Moreover, you can connect it to a serverless backend and add interactivity with an API and database. &lt;/p&gt;

&lt;h1&gt;Choose the right AWS region&lt;/h1&gt;

&lt;p&gt;When you set up your AWS modules, picking an AWS region is the first choice you have to make. Without picking a region, you can’t start working in the AWS Management Console, SDK, or CLI. People usually choose the region according to distance, which is the most obvious choice. However, there are many other factors to consider. Here are some of them: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Costs – different regions have different AWS rates; check the &lt;a href="https://calculator.s3.amazonaws.com/index.html" rel="noopener noreferrer"&gt;cost calculator&lt;/a&gt; to estimate your costs for a particular region&lt;/li&gt;
&lt;li&gt;Latency – choose a region with a smaller latency to make the app more accessible to your target customers&lt;/li&gt;
&lt;li&gt;Security – check the regulations of each region before deciding to choose it&lt;/li&gt;
&lt;li&gt;Service availability – not all services are available to all regions, so make sure you know which ones you need before choosing a region&lt;/li&gt;
&lt;li&gt;AZ availability – not all regions have the same number of availability zones&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The best approach is to pick the factor that is most important for you and use it as a guide to choose your particular AWS region. &lt;/p&gt;

&lt;h1&gt;Use AWS Savings Plans&lt;/h1&gt;

&lt;p&gt;The &lt;a href="https://aws.amazon.com/savingsplans/" rel="noopener noreferrer"&gt;AWS Savings Plan&lt;/a&gt; was introduced in November 2019, as a flexible pricing plan that allows consumers to save up to 72% on Amazon EC2 and AWS Fargate in return for a 1 or 3-year contract commitment to a consistent amount of compute use (e.g. $10/hour). &lt;/p&gt;

&lt;p&gt;You can start using this feature directly from the AWS Cost Explorer control console or using the AWS API/CLI. Here’s how you can pay: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On a monthly basis, with no upfront payment&lt;/li&gt;
&lt;li&gt;On a monthly basis, paying at least half of the commitment price upfront&lt;/li&gt;
&lt;li&gt;Upfront, paying the entire commitment with one payment and achieving the highest savings&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example, if you commit to a usage of $10/hour, you get discounted prices on all your usage up to $10, and any usage beyond this commitment is charged at regular on-demand rates. There are two types of Savings Plans: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmk0microtica2di3k2co.kinstacdn.com%2Fwp-content%2Fuploads%2F2021%2F03%2Faws-savings-plans.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmk0microtica2di3k2co.kinstacdn.com%2Fwp-content%2Fuploads%2F2021%2F03%2Faws-savings-plans.png" alt="aws savings plans"&gt;&lt;/a&gt;&lt;br&gt;
Analyze &lt;a href="https://docs.aws.amazon.com/cur/latest/userguide/cur-sp.html" rel="noopener noreferrer"&gt;AWS’s guides to the Savings Plan&lt;/a&gt; and try to choose the best setup for your specific case. &lt;/p&gt;
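&lt;p&gt;The commitment arithmetic from the $10/hour example can be sketched as follows. The 30% discount is a hypothetical round number for illustration, not an actual AWS rate, and the calculation works in cents to avoid floating-point noise:&lt;/p&gt;

```python
# Hypothetical Savings Plan billing for a single hour; the 30% discount
# is an illustrative number, not an actual AWS rate. Amounts are in cents.
def hourly_charge_cents(usage_cents, commitment_cents=1000, discount_pct=30):
    """Commitment dollars bill at the discounted rate; the rest is on-demand."""
    covered = min(usage_cents, commitment_cents)
    overage = max(0, usage_cents - commitment_cents)
    return covered * (100 - discount_pct) // 100 + overage

print(hourly_charge_cents(800))   # 560: $8 of usage, fully covered and discounted
print(hourly_charge_cents(1400))  # 1100: $10 discounted plus $4 at on-demand rates
```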

&lt;h1&gt;Analyze your AWS bill&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmk0microtica2di3k2co.kinstacdn.com%2Fwp-content%2Fuploads%2F2021%2F03%2Faws-bill-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmk0microtica2di3k2co.kinstacdn.com%2Fwp-content%2Fuploads%2F2021%2F03%2Faws-bill-2.png" alt="aws-bill"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Use tools like &lt;a href="https://aws.amazon.com/aws-cost-management/aws-cost-explorer/" rel="noopener noreferrer"&gt;Cost Explorer&lt;/a&gt;, &lt;a href="https://aws.amazon.com/aws-cost-management/aws-cost-and-usage-reporting/" rel="noopener noreferrer"&gt;Cost &amp;amp; Usage Report&lt;/a&gt;, &lt;a href="https://aws.amazon.com/premiumsupport/technology/trusted-advisor/" rel="noopener noreferrer"&gt;Trusted Advisor&lt;/a&gt;, or &lt;a href="https://docs.aws.amazon.com/solutions/latest/cost-optimization-monitor/welcome.html" rel="noopener noreferrer"&gt;Cost Optimization Monitor&lt;/a&gt; to analyze your AWS bill and see how you are spending your budget. Research all categories of your bill and understand what they mean; contact AWS Support if you find anything you can’t understand, and they’ll help you find the answer. Using separate AWS accounts for separate AWS entities, with centralized billing, makes it easier to tell some kinds of expenses apart from others.&lt;/p&gt;

&lt;h1&gt;Single billing for all accounts&lt;/h1&gt;

&lt;p&gt;Getting a single bill is very convenient for tracking expenses and monitoring spending if you have several accounts. This helps you get an overview of all AWS costs accrued across all your accounts with a consolidated view.&lt;/p&gt;

&lt;p&gt;There’s no extra charge for this service. In the consolidated billing family, the master account pays the costs that all the other accounts accumulate. You can easily trace the costs from each account, and the expense data can also be exported as a CSV file.&lt;/p&gt;

&lt;h1&gt;Create billing alarms&lt;/h1&gt;

&lt;p&gt;Create billing alarms to warn you when your AWS bill crosses critical stages. Be sure to set multiple warning thresholds: one for when the bill rises a little, one for when it rises a lot, and one for when the budget is way over the limit.&lt;/p&gt;
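&lt;p&gt;The tiered-threshold idea can be sketched as follows. CloudWatch billing alarms implement this natively; the dollar amounts here are hypothetical examples:&lt;/p&gt;

```python
# Hypothetical tiered billing thresholds, in dollars; CloudWatch billing
# alarms provide this natively, this only illustrates the tiers.
THRESHOLDS = [("notice", 100), ("warning", 500), ("critical", 1000)]

def triggered_alarms(bill):
    """Return the name of every threshold the current bill has crossed."""
    return [name for name, limit in THRESHOLDS if bill >= limit]

print(triggered_alarms(120))   # ['notice']
print(triggered_alarms(1500))  # ['notice', 'warning', 'critical']
```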

&lt;h1&gt;Use reserved instances optimization&lt;/h1&gt;

&lt;p&gt;This option checks the usage history of Amazon EC2 compute and estimates an ideal number of partial upfront reserved instances. Recommendations are based on hour-by-hour use over the preceding calendar month, gathered across all consolidated billing accounts. It is an integral part of cost optimization that helps you estimate the number of usage hours you need this month based on the previous months. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmk0microtica2di3k2co.kinstacdn.com%2Fwp-content%2Fuploads%2F2021%2F03%2Freserved-instances.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmk0microtica2di3k2co.kinstacdn.com%2Fwp-content%2Fuploads%2F2021%2F03%2Freserved-instances.png" alt="reserved instances"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With this option, you commit to purchasing a reservation for one or three years. There are three payment alternatives: full upfront, partial upfront, and no upfront. The last two allow you to pay the remaining balance monthly during the period. &lt;/p&gt;

&lt;h1&gt;Pay-as-you-go&lt;/h1&gt;

&lt;p&gt;Pay-as-you-go is a straightforward idea with no minimum obligations or long-term contracts. You trade upfront capital spending for low operating costs and pay only for what you use. There is no need to pay for unused capacity in advance or get penalized for wrong estimations. This is one of the key cost advantages inherent in AWS’s pricing strategy.&lt;/p&gt;

&lt;h1&gt;Turn off unused instances by creating schedules&lt;/h1&gt;

&lt;p&gt;In order to optimize costs, it is crucial to shut down unused instances, particularly at the end of the working day or on weekends and vacations. For non-production instances such as those used for development, staging, monitoring, and QA, it is worth setting up on/off hours. For example, by implementing an “on” mode from 8.00 a.m. to 8.00 p.m., Monday to Friday, large expenses can be avoided, particularly if teams work flexible hours.&lt;/p&gt;

&lt;p&gt;By evaluating usage metrics to decide when the instances are most heavily used, you can implement stricter schedules, or apply an always-stopped schedule that you interrupt only when you need access to the instances. Keep in mind that you are still paying for the EBS volumes and other elements attached to instances while they are stopped.&lt;/p&gt;
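&lt;p&gt;The savings from such a schedule are easy to estimate. The hourly rate below is an illustrative number, and the calculation ignores the EBS and other charges that continue while instances are stopped:&lt;/p&gt;

```python
# Rough savings from running instances 12 hours a day, Monday-Friday,
# instead of 24/7. The hourly rate is an illustrative number; EBS and
# other always-on charges are ignored.
def weekly_savings(hourly_rate, on_hours_per_day=12, on_days=5):
    always_on = hourly_rate * 24 * 7
    scheduled = hourly_rate * on_hours_per_day * on_days
    return always_on - scheduled, 1 - scheduled / always_on

saved, fraction = weekly_savings(hourly_rate=0.10)
print(round(saved, 2), f"{fraction:.0%}")  # 10.8 64%
```

Even at a modest rate, cutting 168 weekly hours down to 60 removes roughly two-thirds of the compute bill for that instance.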

&lt;h2&gt;
  
  
  Microtica's Cloud Waste Manager
&lt;/h2&gt;

&lt;p&gt;Reducing cloud waste and cutting cloud costs is easy with our tool, Microtica. You create saving schedules so that resources or environments turn off during the defined periods. This is available for EC2 instances, RDS instances, and auto-scaling groups. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmk0microtica2di3k2co.kinstacdn.com%2Fwp-content%2Fuploads%2F2021%2F03%2Fcreate-schedule-1536x1030.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmk0microtica2di3k2co.kinstacdn.com%2Fwp-content%2Fuploads%2F2021%2F03%2Fcreate-schedule-1536x1030.png" alt="Microtica cloud waste manager"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Create schedules for environment resources&lt;/em&gt;&lt;/p&gt;



&lt;p&gt;When this plan is enabled, all of these services are scheduled to shut down at the specified Stop time and wake up at the defined Start time on the chosen days. A list of all your schedules shows each schedule’s effect on the AWS account and the projected savings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmk0microtica2di3k2co.kinstacdn.com%2Fwp-content%2Fuploads%2F2020%2F04%2FScreen-Shot-2020-04-17-at-12.11.37-copy-1536x542.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmk0microtica2di3k2co.kinstacdn.com%2Fwp-content%2Fuploads%2F2020%2F04%2FScreen-Shot-2020-04-17-at-12.11.37-copy-1536x542.png" alt="saving schedules"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Read the whole &lt;a href="https://microtica.com/blog/reduce-aws-costs-on-non-production-environments/?utm_source=devto&amp;amp;utm_medium=referral_link&amp;amp;utm_campaign=cost_optimization_pillar" rel="noopener noreferrer"&gt;guide on how to reduce costs in non-production environments&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Monitor and track your spending
&lt;/h1&gt;

&lt;p&gt;There are many tools that can help you monitor and analyze your instance metrics, so you can measure your workloads against the gathered data and scale the instance size up or down. AWS Cost Explorer’s resource optimizer and AWS Compute Optimizer are two of these tools. &lt;/p&gt;

&lt;p&gt;Compute Optimizer looks at multiple parameters to identify cost optimizations, such as CPU, network I/O, disk, and memory. The Cost Explorer EC2 optimizer comes in handy because it considers whether you have Reserved Instances: downsizing a reserved instance brings no savings, since you have already committed to paying for it upfront. Compute Optimizer does not make this connection, so it might recommend resizing regardless of whether you have reservations or not.  &lt;/p&gt;
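&lt;p&gt;If you prefer to pull cost data programmatically, a Cost Explorer query can break down monthly spend by service. The sketch below only builds the request in the shape boto3’s get_cost_and_usage expects; the dates are illustrative, and the live call (commented out) requires AWS credentials and Cost Explorer permissions:&lt;/p&gt;

```python
# Cost Explorer request: monthly unblended cost, grouped by service.
# The date range is illustrative.
request = {
    "TimePeriod": {"Start": "2021-03-01", "End": "2021-04-01"},
    "Granularity": "MONTHLY",
    "Metrics": ["UnblendedCost"],
    "GroupBy": [{"Type": "DIMENSION", "Key": "SERVICE"}],
}

# import boto3
# ce = boto3.client("ce")
# response = ce.get_cost_and_usage(**request)
# for group in response["ResultsByTime"][0]["Groups"]:
#     print(group["Keys"], group["Metrics"]["UnblendedCost"]["Amount"])
```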

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmk0microtica2di3k2co.kinstacdn.com%2Fwp-content%2Fuploads%2F2020%2F04%2FScreen-Shot-2020-04-02-at-11.46.43-1536x838.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmk0microtica2di3k2co.kinstacdn.com%2Fwp-content%2Fuploads%2F2020%2F04%2FScreen-Shot-2020-04-02-at-11.46.43-1536x838.png" alt="Microtica cloud costs dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Microtica’s cost explorer can show you the following data: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS’s estimated cost for the current month&lt;/li&gt;
&lt;li&gt;how month-to-date spending is trending&lt;/li&gt;
&lt;li&gt;a breakdown of last year’s cloud spending with a forecast for the coming year&lt;/li&gt;
&lt;li&gt;the AWS account that costs you the most&lt;/li&gt;
&lt;li&gt;the services you spend the most on&lt;/li&gt;
&lt;li&gt;costs by allocation tag&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can also get an overview of the accumulated estimated savings for the month. The data is based on the current active saving schedules and daily utilization hours.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check out how we helped one tech company achieve &lt;a href="https://microtica.com/case-studies/reduce-aws-costs-on-non-production-environments/?utm_source=devto&amp;amp;utm_medium=referral_link&amp;amp;utm_campaign=cost_optimization_pillar" rel="noopener noreferrer"&gt;68% in AWS cost savings&lt;/a&gt;.&lt;/strong&gt; &lt;/p&gt;

&lt;h1&gt;
  
  
  Choose the right storage class
&lt;/h1&gt;

&lt;p&gt;Amazon S3 provides &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html" rel="noopener noreferrer"&gt;six storage classes&lt;/a&gt;, each built for specific use cases and available at differing rates.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;S3 Standard: for frequently accessed data with low latency and high throughput performance.&lt;/li&gt;
&lt;li&gt;S3 Standard-Infrequent Access: for infrequently accessed data that needs rapid access at times. &lt;/li&gt;
&lt;li&gt;S3 One Zone-Infrequent Access: the difference between this class and S3 Standard-IA is that it stores data in a single AZ at a 20% lower cost, instead of a minimum of three AZs. &lt;/li&gt;
&lt;li&gt;S3 Intelligent-Tiering: automatically moves data to the most cost-effective access tier, without operational overhead.&lt;/li&gt;
&lt;li&gt;S3 Glacier: long-term data archiving. &lt;/li&gt;
&lt;li&gt;S3 Glacier Deep Archive: long-term data archiving with access once or twice a year.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The choice depends on your data needs and requirements, as well as your budget. Consider introducing object lifecycle management that moves data between the storage classes dynamically to optimize the cost of your data storage.&lt;/p&gt;

&lt;h1&gt;
  
  
  Intelligent tiering
&lt;/h1&gt;

&lt;p&gt;S3 Intelligent Tiering was created for teams that want to automatically adjust costs when data access patterns change, eliminating the risk of performance bottlenecks and overspending. The model automatically delivers cost savings by storing objects in two access tiers: frequent access and infrequent access. &lt;/p&gt;

&lt;p&gt;S3 Intelligent-Tiering monitors access patterns and moves objects that have not been accessed for 30 days to the infrequent access tier, for a small monthly monitoring and automation charge per object. There are no retrieval costs in S3 Intelligent-Tiering. When an object in the infrequent access tier is accessed again, it is automatically moved back to the frequent access tier. There are also no additional tiering charges when objects move between access tiers within the S3 Intelligent-Tiering storage class.&lt;/p&gt;
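&lt;p&gt;Objects can be opted into Intelligent-Tiering at upload time by setting the storage class. In the sketch below the bucket and key names are made up, and the actual put_object call is commented out because it needs AWS credentials:&lt;/p&gt;

```python
# Upload parameters in the shape boto3's put_object expects;
# "my-example-bucket" and the key are hypothetical names.
upload_params = {
    "Bucket": "my-example-bucket",
    "Key": "reports/2021-03.csv",
    "Body": b"example,data\n",
    "StorageClass": "INTELLIGENT_TIERING",  # opt into automatic tiering
}

# import boto3
# boto3.client("s3").put_object(**upload_params)
```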

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmk0microtica2di3k2co.kinstacdn.com%2Fwp-content%2Fuploads%2F2021%2F03%2Fs3-intelligent-tier-green-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmk0microtica2di3k2co.kinstacdn.com%2Fwp-content%2Fuploads%2F2021%2F03%2Fs3-intelligent-tier-green-2.png" alt="intelligent tiering"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Specify expiration dates
&lt;/h1&gt;

&lt;p&gt;AWS S3 allows you to define expiration dates for S3 objects, as well as rules to move objects to cheaper storage tiers. When an object reaches its expiration date, it has reached the end of its lifetime and is removed asynchronously. This is known as the lifecycle expiration rule. Since S3 doesn’t charge for the storage time of objects that have expired, this is a great way to eliminate spending you don’t need. &lt;/p&gt;

&lt;p&gt;There are some rules though:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For S3 Intelligent-Tiering, S3 Standard-IA, or S3 One Zone-IA storage, the minimum is 30 days, so if you set objects to expire in less than 30 days, you are still charged for 30 days.&lt;/li&gt;
&lt;li&gt;For S3 Glacier storage, the minimum is 90 days, so if you set objects to expire in less than 90 days, you are still charged for 90 days.&lt;/li&gt;
&lt;li&gt;For S3 Glacier Deep Archive storage, the minimum is 180 days, so if you set objects to expire in less than 180 days, you are charged for 180 days.&lt;/li&gt;
&lt;/ul&gt;
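&lt;p&gt;Rules like these can be expressed as a lifecycle configuration. This sketch, in the shape boto3’s put_bucket_lifecycle_configuration expects, transitions objects under a hypothetical logs/ prefix to Standard-IA after 30 days and expires them after a year; the bucket name is made up and the live call is commented out:&lt;/p&gt;

```python
# Lifecycle configuration: transition "logs/" objects to Standard-IA
# after 30 days (the class's minimum), expire them after 365 days.
lifecycle = {
    "Rules": [
        {
            "ID": "expire-old-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            "Expiration": {"Days": 365},
        }
    ]
}

# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-example-bucket", LifecycleConfiguration=lifecycle
# )
```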

&lt;h1&gt;
  
  
  Choose the right instance type
&lt;/h1&gt;

&lt;p&gt;Because different instance types cost different amounts, it is essential to make sure that your team is using the most cost-effective ones. Try to pick the instance that best fits your program’s workload.&lt;/p&gt;

&lt;p&gt;When deciding on variables such as the type of processing unit and the storage space required, keep your particular use case in mind to optimize your workloads while reducing your spending. Choose the instance configuration that delivers the best price for the value you get. Review your choice of instances every few months to confirm it still reflects the reality of your workload.&lt;/p&gt;

&lt;p&gt;To pick the right size for a resource, you can use a combination of AWS tools. AWS Cost Explorer’s resource optimizer and AWS Compute Optimizer are services that can help implement a right-sizing plan. &lt;/p&gt;

&lt;p&gt;These tools observe your workload’s performance and capacity, such as CPU and memory utilization, and suggest instance types and sizes based on those parameters. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmk0microtica2di3k2co.kinstacdn.com%2Fwp-content%2Fuploads%2Felementor%2Fthumbs%2Fec2-instance-types-p4qcyflmwx9y94l8fr5z0vzi3adg67bexlbk60uayk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmk0microtica2di3k2co.kinstacdn.com%2Fwp-content%2Fuploads%2Felementor%2Fthumbs%2Fec2-instance-types-p4qcyflmwx9y94l8fr5z0vzi3adg67bexlbk60uayk.png" alt="EC2 instance types"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Keep in mind that development resources don’t need to be the same size as production instances, so you can save significantly by downsizing non-production environments without impacting the performance you need to get the job done. &lt;/p&gt;

&lt;p&gt;Categorizing your instances with tags is a good practice too. You can track the hourly cost of running instances in real time, break it down by tag, and use those results to motivate the team to reduce costs.&lt;/p&gt;
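&lt;p&gt;As a toy illustration of tag-based cost tracking, the snippet below totals per-instance hourly costs by a team tag. The instance IDs, tags, and rates are all made up:&lt;/p&gt;

```python
# Sample per-instance hourly costs with a "team" tag attached.
instances = [
    {"id": "i-01", "team": "backend", "hourly_cost": 0.10},
    {"id": "i-02", "team": "backend", "hourly_cost": 0.05},
    {"id": "i-03", "team": "data",    "hourly_cost": 0.20},
]

# Aggregate hourly spend per team tag.
cost_by_team = {}
for inst in instances:
    cost_by_team[inst["team"]] = cost_by_team.get(inst["team"], 0.0) + inst["hourly_cost"]

for team, cost in sorted(cost_by_team.items()):
    print(f"{team}: ${cost:.2f}/hour")
```

&lt;p&gt;The same grouping logic scales up to real billing exports or Cost Explorer data grouped by cost allocation tags.&lt;/p&gt;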

&lt;blockquote&gt;
&lt;h1&gt;
  
  
  Check out our comprehensive guide on creating an &lt;a href="https://microtica.com/aws-cost-optimization/?utm_source=devto&amp;amp;utm_medium=referral_link&amp;amp;utm_campaign=cost_optimization_pillar" rel="noopener noreferrer"&gt;AWS cost optimization strategy&lt;/a&gt;.
&lt;/h1&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  A partnership between finance and technology teams
&lt;/h1&gt;

&lt;p&gt;This requires a cultural shift that will make finance and tech teams collaborate better. Cross-functional teams should work together to promote smoother implementation while gaining greater financial and corporate leverage at the same time. &lt;/p&gt;

&lt;p&gt;This partnership should remove barriers between the two teams, providing a better overview of finances for the tech team. On the other hand, the financial department should get a clear image of how the tech team allocates its resources.&lt;/p&gt;

&lt;p&gt;Engineering teams can more easily create better features, applications, and migrations. It also provides for a cross-functional debate of whether to invest and when. Often a company may expect to cut back on expenditures, while sometimes it chooses to invest more. Yet, teams have to know why the decisions are made.&lt;/p&gt;

&lt;p&gt;To establish a closer relationship between the finance and technology departments, some companies adopt FinOps. FinOps manages cloud finances, with the goal to add more financial transparency to the variable expenditure model used by the company. This provides more balance between speed, costs, and software quality for teams. &lt;/p&gt;

&lt;p&gt;FinOps enables all operating teams to access real-time data that they need to influence their spending and make wise decisions that ultimately lead to efficient optimization of cloud costs without impacting the final product’s performance, speed, and efficiency.&lt;/p&gt;

&lt;h1&gt;
  
  
  Use AWS License Manager
&lt;/h1&gt;

&lt;p&gt;Companies need an appropriate license management plan to stay compliant with license terms, prevent costly over-provisioning, and make license true-ups and audits simpler by making use of existing software licenses. AWS License Manager lets you easily manage licenses from various software vendors across AWS and on-premises servers. &lt;/p&gt;

&lt;p&gt;AWS License Manager gives administrators a consolidated view of license use, so they can figure out how many licenses they need and avoid buying more than they use. This increased visibility also helps you spot overpayments and avoid license audit fines. AWS License Manager is simple to use and saves time and money when it comes to monitoring and handling licenses.&lt;/p&gt;

&lt;h1&gt;
  
  
  How much impact do these recommendations have on your AWS cost optimization strategy?
&lt;/h1&gt;

&lt;p&gt;After elaborating on the recommendations, let’s look at how much impact some of them can have. We also estimated their complexity, so you can prioritize recommendations based on how much time you’ll have to spend implementing them and the effect you’ll get from them. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S3 Intelligent-Tiering&lt;/strong&gt; is simple and fast to set up, with relatively low impact. Turning it on takes around 10-15 minutes. It monitors how your object data is accessed and automatically decides whether each object should live in regular S3 (which costs more) or in the infrequent access tier (which costs less). &lt;/p&gt;

&lt;p&gt;Another simple but more impactful option is a &lt;strong&gt;Savings Plan&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmk0microtica2di3k2co.kinstacdn.com%2Fwp-content%2Fuploads%2F2021%2F03%2Faws-cost-recommendations-matrix.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmk0microtica2di3k2co.kinstacdn.com%2Fwp-content%2Fuploads%2F2021%2F03%2Faws-cost-recommendations-matrix.png" alt="aws recommendations impact"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Unfortunately, companies aren’t taking enough advantage of these savings opportunities. For example, the Flexera 2021 State of the Cloud Report found that 52% of users use AWS Reserved Instances, while only 37% use AWS Spot Instances. However, adoption of AWS Savings Plans is growing quickly (30% in 2020). Organizations have to move quicker and more efficiently to achieve more savings and reduce their cloud waste. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmk0microtica2di3k2co.kinstacdn.com%2Fwp-content%2Fuploads%2F2021%2F03%2Fdiscount-types-aws.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmk0microtica2di3k2co.kinstacdn.com%2Fwp-content%2Fuploads%2F2021%2F03%2Fdiscount-types-aws.png" alt="Discount types"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Every year, cloud usage rises because of the advantages of cloud computing. For enterprises, the impact on collaboration, security, development, and revenue is evident. However, the additional actions companies take can significantly boost cost savings. &lt;/p&gt;

&lt;p&gt;Start by creating an &lt;a href="https://microtica.com/aws-cost-optimization/?utm_source=devto&amp;amp;utm_medium=referral_link&amp;amp;utm_campaign=cost_optimization_pillar" rel="noopener noreferrer"&gt;AWS cost optimization strategy&lt;/a&gt;. To do this effectively, you first need to &lt;strong&gt;identify your existing costs.&lt;/strong&gt; Highlight those that are necessary and try cutting the rest. Then, &lt;strong&gt;define your cost optimization goals.&lt;/strong&gt; Research your company and the objectives you wish to accomplish, and set targets on a weekly, quarterly, or annual basis, or for a specific date that works for you.&lt;/p&gt;

&lt;p&gt;After you’ve defined your goals, it’s time to take some action. &lt;strong&gt;Choose the activities&lt;/strong&gt; you’re going to take and prioritize them. Here is the list of the AWS cost optimization suggestions we mentioned in this article: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Apply for AWS credits&lt;/li&gt;
&lt;li&gt;Utilize AWS Free Tier&lt;/li&gt;
&lt;li&gt;Choose the right AWS region&lt;/li&gt;
&lt;li&gt;Use AWS Savings Plans&lt;/li&gt;
&lt;li&gt;Analyze your AWS bill&lt;/li&gt;
&lt;li&gt;Single billing for all accounts&lt;/li&gt;
&lt;li&gt;Create billing alarms&lt;/li&gt;
&lt;li&gt;Use reserved instances optimization&lt;/li&gt;
&lt;li&gt;Pay-as-you-go&lt;/li&gt;
&lt;li&gt;Turn off unused instances by creating schedules&lt;/li&gt;
&lt;li&gt;Monitor and track your spending&lt;/li&gt;
&lt;li&gt;Choose the right storage class&lt;/li&gt;
&lt;li&gt;Intelligent tiering&lt;/li&gt;
&lt;li&gt;Specify expiration dates&lt;/li&gt;
&lt;li&gt;Choose the right instance type&lt;/li&gt;
&lt;li&gt;A partnership between finance and technology&lt;/li&gt;
&lt;li&gt;Use AWS License Manager&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Finally, &lt;strong&gt;monitor and measure your achievements&lt;/strong&gt;. Implement tools and dashboards so you can track your results against your chosen metrics. Create a system for evaluating and improving your plan by comparing the outcomes to your established objectives. &lt;/p&gt;

&lt;p&gt;And, &lt;strong&gt;don’t forget to iterate&lt;/strong&gt;. Not everything that works for others will work for you. Modify and adjust your actions until you find the formula that saves you from paying enormous cloud bills. &lt;/p&gt;

&lt;p&gt;You will realize long-term financial gains by taking measures to handle your cloud savings efficiently. This will help your business improve growth, repurpose more money for market research and development, and finally, for creating more user-oriented products and services.&lt;/p&gt;

&lt;p&gt;We hope these tips will help you create a smart and efficient AWS cost optimization strategy. Happy saving!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS Costs - Explained</title>
      <dc:creator>Sara Miteva</dc:creator>
      <pubDate>Mon, 05 Apr 2021 13:00:20 +0000</pubDate>
      <link>https://dev.to/microtica/aws-costs-explained-3f58</link>
      <guid>https://dev.to/microtica/aws-costs-explained-3f58</guid>
      <description>&lt;p&gt;A successful approach to AWS cost optimization starts by gaining a thorough view of the existing costs, finding potential for cost optimization, and incorporating modifications. AWS and other providers of software have resources to help clients understand how they are spending.&lt;/p&gt;

&lt;p&gt;In this article, I'll provide a comprehensive guide on how to understand your AWS costs and needs.&lt;/p&gt;

&lt;h1&gt;
  
  
  What are your data storage requirements?
&lt;/h1&gt;

&lt;p&gt;The first step is to consider the performance profile of each of your workloads in order to optimize storage. To calculate the input/output operations per second (IOPS), throughput, and other metrics you need for this analysis, you can conduct a performance evaluation.&lt;/p&gt;

&lt;blockquote&gt;
&lt;h1&gt;
  
  
  Check out our comprehensive guide on creating an &lt;a href="https://microtica.com/aws-cost-optimization/?utm_source=devto&amp;amp;utm_medium=referral_link&amp;amp;utm_campaign=cost_optimization_pillar" rel="noopener noreferrer"&gt;AWS cost optimization strategy&lt;/a&gt;.
&lt;/h1&gt;
&lt;/blockquote&gt;

&lt;p&gt;AWS storage services are configured for various situations related to storage. There’s no one data storage solution that is suitable for all workloads. Evaluate data storage solutions for each workload independently when determining the storage requirements.&lt;/p&gt;

&lt;p&gt;To do this efficiently, you should identify some key information. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;How often do you access your data?&lt;/strong&gt; AWS has different pricing plans depending on how frequently you need to access data. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Do you need high IOPS or throughput for your data store?&lt;/strong&gt; AWS offers data types that are efficient and performance-tailored. It can help you decide the correct amount of storage and prevent overpayment by recognizing IOPS and throughput specifications.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;How important is your data?&lt;/strong&gt; Vital or regulated data needs to be maintained at almost any cost and retained for a long period.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;How sensitive is your data?&lt;/strong&gt; Highly confidential data needs to be shielded against accidental and malicious modifications. Durability, cost, and protection are equally critical to remember.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;How much data do you have?&lt;/strong&gt; This is basic information to determine the storage you need. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;How temporary is your data?&lt;/strong&gt; You only need transient data briefly, requiring no durability. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;What is your data storing budget?&lt;/strong&gt; This is also a critical factor when deciding which provider to choose.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmk0microtica2di3k2co.kinstacdn.com%2Fwp-content%2Fuploads%2F2021%2F03%2Faws-storage-services.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmk0microtica2di3k2co.kinstacdn.com%2Fwp-content%2Fuploads%2F2021%2F03%2Faws-storage-services.png" alt="aws storage services"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  S3 storage classes
&lt;/h1&gt;

&lt;p&gt;S3 storage classes affect the availability, lifetime, and spending on objects stored in S3. Every S3 bucket can store objects with different classes, which can be modified and changed during their lifetime. Picking out the right storage class is crucial to achieving cost-effectiveness. The wrong storage class can lead to many unnecessary costs. &lt;/p&gt;

&lt;p&gt;Amazon S3 provides &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html" rel="noopener noreferrer"&gt;six storage classes&lt;/a&gt;, each built for specific use cases and available at differing rates. Each of them has a different cost per gigabyte. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;S3 Standard:&lt;/strong&gt; costs are based on object size. Store here the objects that you will be accessing frequently. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;S3 Standard-Infrequent Access:&lt;/strong&gt; costs are based on object size and retrieval.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;S3 One Zone-Infrequent Access:&lt;/strong&gt; the difference between this class and S3 Standard-IA is that it stores data in a single AZ at a 20% lower cost, instead of a minimum of three AZs. However, this reduces availability.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;S3 Intelligent-Tiering:&lt;/strong&gt; transfers objects between classes based on the frequency of use, charging per transfer. Frequently used objects go to Standard, while infrequently used objects go to Standard-IA. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;S3 Glacier:&lt;/strong&gt; long-term data archiving, additional storage at a lower cost.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;S3 Glacier Deep Archive:&lt;/strong&gt; long-term data archiving with access once or twice a year.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  EBS Storage
&lt;/h1&gt;

&lt;p&gt;Elastic Block Store (EBS) provides storage for EC2 virtual machines. It can go up to 16 TiB per disk, offering SSD or HDD support. This is provisioned storage you pay for per gigabyte, on a monthly basis, so you should estimate the amount of storage you need at a given time and only purchase that volume. You can increase the size of your EBS storage later. &lt;/p&gt;

&lt;p&gt;When purchasing EBS storage, the first thing to decide is whether to use SSD or HDD volumes. SSD volumes are great for frequent read and write operations, while HDD is better for large streaming workloads that require high throughput. &lt;/p&gt;

&lt;p&gt;There are several types of EBS storage volumes. You can see a list of them &lt;a href="https://aws.amazon.com/ebs/pricing/" rel="noopener noreferrer"&gt;here&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;When choosing the right volume, your first instinct may be to dismiss HDD storage, but don’t rush the decision. If you decide on one of the SSD options, regularly monitoring your EBS storage can show you whether HDD would be a better choice, for instance if you don’t actually need that level of performance. Also, don’t forget to delete EBS volumes you no longer use. This can save you many unnecessary costs. &lt;/p&gt;

&lt;p&gt;Here, we need to mention EBS snapshots as well. EBS snapshots are billed for the space actually used in the EBS volume, not the provisioned storage. They are also charged per gigabyte per month, at a price of $0.05 per GB-month of data stored. When you want to restore a snapshot faster, you can use EBS Fast Snapshot Restore, at a higher price. &lt;/p&gt;
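&lt;p&gt;Using the $0.05 per GB-month price mentioned above, the snapshot cost is simple arithmetic:&lt;/p&gt;

```python
# Monthly and yearly cost of snapshot data at $0.05 per GB-month.
PRICE_PER_GB_MONTH = 0.05
snapshot_gb = 100  # example amount of snapshot data

monthly = snapshot_gb * PRICE_PER_GB_MONTH
yearly = monthly * 12
print(f"${monthly:.2f}/month, ${yearly:.2f}/year")  # $5.00/month, $60.00/year
```

&lt;p&gt;Because snapshots are incremental, the billed gigabytes are usually well below the volume’s provisioned size, but they still add up if old snapshots are never pruned.&lt;/p&gt;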

&lt;p&gt;You can start with the Free Tier that offers 30GB of EBS Storage, 2 million I/Os, and 1GB of snapshot storage.&lt;/p&gt;

&lt;h1&gt;
  
  
  EC2 pricing
&lt;/h1&gt;

&lt;p&gt;EC2 instances are charged per hour or per second while they are running. This means that when we don’t need them, we should shut them down. Here, you’ll also have to pay for the provisioned EBS storage, regardless of whether your EC2 instances are running or not. Finally, you’ll also pay for data transfer out, a price that varies depending on the region. There are also other points of pricing you can find in the &lt;a href="https://aws.amazon.com/ec2/pricing/" rel="noopener noreferrer"&gt;EC2 documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;There are several types of EC2 payment options: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmk0microtica2di3k2co.kinstacdn.com%2Fwp-content%2Fuploads%2F2021%2F03%2Fec2-pricing.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmk0microtica2di3k2co.kinstacdn.com%2Fwp-content%2Fuploads%2F2021%2F03%2Fec2-pricing.png" alt="ec2-pricing-plans"&gt;&lt;/a&gt;&lt;br&gt;
There are about 400 EC2 instance types you can choose from, so it’s important to pick the right instance family and size in order to be cost-effective. For right-sizing, you can use Amazon CloudWatch, AWS Cost Explorer, and AWS Trusted Advisor. &lt;/p&gt;
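&lt;p&gt;As a simplified sketch of what these right-sizing tools do, the snippet below flags instances whose average CPU utilization falls under an assumed threshold as downsizing candidates. The instance names and utilization figures are sample data, not real CloudWatch output:&lt;/p&gt;

```python
# Sample average CPU utilization per instance (percent), stand-ins for
# what you would fetch from CloudWatch metrics.
avg_cpu = {"i-web-1": 6.0, "i-web-2": 55.0, "i-batch-1": 12.0}
THRESHOLD = 20.0  # assumed cut-off for "underutilized"

candidates = sorted(i for i, cpu in avg_cpu.items() if cpu < THRESHOLD)
print(candidates)  # ['i-batch-1', 'i-web-1']
```

&lt;p&gt;Real right-sizing also weighs memory, network, and disk, which is exactly what Compute Optimizer adds on top of a naive CPU check like this.&lt;/p&gt;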

&lt;h1&gt;
  
  
  Cost savings in serverless
&lt;/h1&gt;

&lt;p&gt;Serverless computing can save you a lot of time and money. Here are some of the benefits: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No need for server management&lt;/li&gt;
&lt;li&gt;Scale automatically without downtime&lt;/li&gt;
&lt;li&gt;Pay for what you use&lt;/li&gt;
&lt;li&gt;Migrate a large amount of everyday work to AWS&lt;/li&gt;
&lt;li&gt;Save time you can use to focus on your actual product&lt;/li&gt;
&lt;li&gt;Become more agile and flexible&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To go serverless on AWS, you can use AWS Lambda for computing, DynamoDB or Aurora for data, S3 for storage, and API Gateway as a proxy.&lt;/p&gt;

&lt;h1&gt;
  
  
  Database pricing (RDS &amp;amp; DynamoDB)
&lt;/h1&gt;

&lt;p&gt;When it comes to RDS pricing, the first thing to think about is the instance you choose. The only serverless option is Amazon Aurora. Next, database storage is also an important factor. Obviously, the bigger the database, the bigger the cost. The remaining two factors are backup storage and data transfer between availability zones and storage. &lt;/p&gt;

&lt;p&gt;You can choose one of the following &lt;a href="https://aws.amazon.com/rds/instance-types/" rel="noopener noreferrer"&gt;Amazon RDS instances&lt;/a&gt;: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;General purpose (T3, T2, M6g, M5, M5d, M4)&lt;/li&gt;
&lt;li&gt;Memory optimized (R6g, R5, R5b, R5d, R4, X1e, X1, Z1d)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can try Amazon RDS for free and pay for what you use. The payment options are on-demand or Reserved Instances. To estimate your spending, try the &lt;a href="https://calculator.aws/#/" rel="noopener noreferrer"&gt;Pricing Calculator&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As for DynamoDB, you can also pay on-demand or for provisioned capacity. You can see the difference between read and write capacity units depending on the pricing type &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;To control your DynamoDB costs, use &lt;strong&gt;auto-scaling&lt;/strong&gt;. Auto-scaling uses traffic patterns to dynamically adjust the number of read and write capacity units, which helps with DynamoDB workloads that are difficult to predict. When you define a scaling policy for read/write capacity, you only enter the minimum and maximum values for provisioning. With alarms, you can trigger the auto-scaling policy to perform certain steps in order to scale up or down. &lt;/p&gt;

&lt;p&gt;To save more, you can purchase reserved capacity units for a period of one or three years. This commitment will allow you to get capacity units at a reduced price. &lt;/p&gt;
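&lt;p&gt;For reference, DynamoDB auto-scaling is configured through Application Auto Scaling: you register the table as a scalable target with minimum and maximum capacity, then attach a target tracking policy. The table name and capacity values below are hypothetical, and the live calls are commented out because they need AWS credentials:&lt;/p&gt;

```python
# Scalable target: bound read capacity for a hypothetical table.
target = {
    "ServiceNamespace": "dynamodb",
    "ResourceId": "table/my-example-table",
    "ScalableDimension": "dynamodb:table:ReadCapacityUnits",
    "MinCapacity": 5,    # never provision fewer read units than this
    "MaxCapacity": 100,  # cap so a traffic spike can't overspend
}

# Target tracking policy: keep consumed read capacity near 70%.
policy_config = {
    "TargetValue": 70.0,
    "PredefinedMetricSpecification": {
        "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
    },
}

# import boto3
# aas = boto3.client("application-autoscaling")
# aas.register_scalable_target(**target)
# aas.put_scaling_policy(
#     PolicyName="read-capacity-scaling",
#     PolicyType="TargetTrackingScaling",
#     ServiceNamespace=target["ServiceNamespace"],
#     ResourceId=target["ResourceId"],
#     ScalableDimension=target["ScalableDimension"],
#     TargetTrackingScalingPolicyConfiguration=policy_config,
# )
```

&lt;p&gt;The same pattern, with the WriteCapacityUnits dimension, covers write capacity.&lt;/p&gt;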

&lt;h1&gt;
  
  
  Key takeaways
&lt;/h1&gt;

&lt;p&gt;To choose the right AWS pricing plans, you first need to understand your data storage requirements. The importance, sensitivity, and amount of your data, as well as other characteristics, will determine your final AWS spending. &lt;/p&gt;

&lt;p&gt;Then, you need to choose from the six storage classes AWS offers, as well as EBS storage volumes. There are also several EC2 pricing plans, plus the option of serverless computing. Finally, we’ve explained Amazon RDS and DynamoDB pricing. What you choose depends on your application and your data storage requirements. &lt;/p&gt;

</description>
      <category>cloud</category>
      <category>devops</category>
      <category>management</category>
      <category>aws</category>
    </item>
    <item>
      <title>Defining an AWS Cost Optimization Strategy</title>
      <dc:creator>Sara Miteva</dc:creator>
      <pubDate>Fri, 02 Apr 2021 08:20:48 +0000</pubDate>
      <link>https://dev.to/microtica/defining-an-aws-cost-optimization-strategy-3a5</link>
      <guid>https://dev.to/microtica/defining-an-aws-cost-optimization-strategy-3a5</guid>
      <description>&lt;p&gt;How well does your company handle cloud costs? While you could have spending statistics at your disposal that will reassure you that the production team is under its monthly budget or that your monthly recurring income is on an upward trajectory, this data does not actually mean that you are handling investments in the cloud as well as you should be.&lt;/p&gt;

&lt;blockquote&gt;
&lt;h1&gt;
  
  
  Check out our comprehensive guide on creating an &lt;a href="https://microtica.com/aws-cost-optimization/?utm_source=devto&amp;amp;utm_medium=referral_link&amp;amp;utm_campaign=cost_optimization_pillar"&gt;AWS cost optimization strategy&lt;/a&gt;.
&lt;/h1&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://www.flexera.com/blog/cloud/cloud-computing-trends-2021-state-of-the-cloud-report/"&gt;RightScale and Flexera teamed up&lt;/a&gt; to research the cloud spending habits of companies. What they found was that &lt;strong&gt;35% or even higher of cloud spend is wasted.&lt;/strong&gt; In times when nothing is certain and when a pandemic has taken over the world, companies are very careful with their spending. Saving the resources you normally waste on cloud spend could open new possibilities for product improvement and growth. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CaxfK7Pf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://mk0microtica2di3k2co.kinstacdn.com/wp-content/uploads/2021/03/cloud-waste.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CaxfK7Pf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://mk0microtica2di3k2co.kinstacdn.com/wp-content/uploads/2021/03/cloud-waste.png" alt="cloud waste report" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To hit you with some more numbers, 61% of companies are prioritizing cloud cost optimization this year, which makes it the number one initiative once again. Another top-three initiative is getting better financial reporting on cloud costs. &lt;/p&gt;

&lt;p&gt;There are &lt;a href="https://microtica.com/blog/7-challenges-with-aws-costs/?utm_source=devto&amp;amp;utm_medium=referral_link&amp;amp;utm_campaign=cost_optimization_pillar"&gt;many challenges with AWS&lt;/a&gt; that teams are facing. It is not uncommon to see reports stating that companies are overspending on the cloud. They are losing money on unused assets and consuming more capacity than they need. Rightsizing, scheduling, and purchasing Reserved Instances for predictable workloads are some of the practices that AWS users often leverage to reduce their cloud costs.&lt;/p&gt;

&lt;p&gt;However, these options might not be the only solutions. Every year there are new initiatives, tools, and best practices for AWS cost optimization that, when used right, could save you a lot more. There are many reasons why you need an &lt;a href="https://microtica.com/aws-cost-optimization/?utm_source=devto&amp;amp;utm_medium=referral_link&amp;amp;utm_campaign=cost_optimization_pillar"&gt;AWS cost optimization strategy&lt;/a&gt;. In this article, we want to give a short introduction to what an AWS cost optimization strategy looks like. &lt;/p&gt;

&lt;h1&gt;
  
  
  Identify your costs
&lt;/h1&gt;

&lt;p&gt;Costs and expenses are two different things. Expenses are what a business requires to continue operating, while costs are associated with delivering the final product. Costs can be fixed or variable, depending on the company’s production. &lt;/p&gt;

&lt;p&gt;Dividing costs into fixed or variable can help identify where you can spend less. Analyze your requirements: What kind of storage do you need? How much? What will your day-to-day operations look like? &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CYU2O1Ep--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://mk0microtica2di3k2co.kinstacdn.com/wp-content/uploads/2021/03/cost-vs.-benefit-e1616185424523.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CYU2O1Ep--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://mk0microtica2di3k2co.kinstacdn.com/wp-content/uploads/2021/03/cost-vs.-benefit-e1616185424523.png" alt="identify aws costs" width="800" height="581"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Much like production lines identify which products are slowing production and not selling, you can identify projects that are not performing as you anticipated and no longer need the scaling requirements set at the beginning. Decrease those production instances and storage, and auto-scale when needed. &lt;/p&gt;

&lt;p&gt;The most important thing here shouldn’t be the costs you cut, but where you can focus the resources to increase growth. This is called strategic cost reduction. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Key takeaway:&lt;/strong&gt; Define which costs are strategically critical to the operation of your business. Everything else can either be decreased or cut completely, as they are non-essential costs. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  Define your goals
&lt;/h1&gt;

&lt;p&gt;Always start with the end goal in mind. To successfully adopt a cost reduction strategy, making a plan comes first. &lt;/p&gt;

&lt;p&gt;Do thorough research and analysis on your business and the goals you want to achieve. Specify monthly, quarterly, or yearly goals, or a definite date that makes sense for your situation. &lt;/p&gt;

&lt;p&gt;You can set cost avoidance goals by team. If some teams use a lot of computing and storage power, you can use rightsizing recommendations to create targets for them to optimize their workloads and follow the process. This is more effective than simple waste reduction, as it helps teams make intelligent decisions about their cloud needs.  &lt;/p&gt;

&lt;p&gt;Like any other initiative in the company, there needs to be direction and leadership. Cost optimization should be considered a strategic move for the whole business. &lt;/p&gt;

&lt;p&gt;But the most important goal here could be to create a cost-effective environment. Development teams should be enabled to understand cloud finance and economics. A good start is obtaining a cloud certification from AWS so teams can discuss and implement cost management in the company. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Key takeaway:&lt;/strong&gt; Everyone in the company should be aware of the cost optimization goals. Enabling teams to get more insights into cloud economics will create a cost awareness culture, which is the most important goal.  &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  Practice makes perfect
&lt;/h1&gt;

&lt;p&gt;With the specified goals in mind, an execution plan is the logical next step. But with countless compelling initiatives to pursue, how do you prioritize cost optimization recommendations and decide on the best ones for you? &lt;/p&gt;

&lt;p&gt;The simplest way to decide is to look at two parameters: the benefit and the investment.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The benefit&lt;/strong&gt; looks at the estimated potential savings you can get by implementing the recommendation. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The investment&lt;/strong&gt; looks at the estimated level of work required to implement the recommendation. This can be assessed in terms of time and resources, customer impact, and technical risk to the system. &lt;/p&gt;

&lt;p&gt;Here is an example of how you can look at the parameters: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7eZ_H0pS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://mk0microtica2di3k2co.kinstacdn.com/wp-content/uploads/2021/03/benefit-investment--e1616186249526.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7eZ_H0pS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://mk0microtica2di3k2co.kinstacdn.com/wp-content/uploads/2021/03/benefit-investment--e1616186249526.png" alt="parameters" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you assign a score value to each idea, you’ll get a prioritization board that will ultimately be the foundation of your implementation plan. It could look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_0QZcZSF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://mk0microtica2di3k2co.kinstacdn.com/wp-content/uploads/2021/03/benefit-vs-impact-e1616186391129.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_0QZcZSF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://mk0microtica2di3k2co.kinstacdn.com/wp-content/uploads/2021/03/benefit-vs-impact-e1616186391129.png" alt="cost optimization implementation plan" width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Key takeaway:&lt;/strong&gt; Prioritizing initiatives will give you a direction and will set things into perspective. You will understand what is feasible for your short-term and long-term goals. This will be the first step toward the actual implementation plan.  &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  Measure and improve your strategy
&lt;/h1&gt;

&lt;p&gt;In order to be able to measure the success of your strategy, you need to define some metrics to guide you. Some cost management metrics that can help you track costs more effectively are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Monthly growth – how your total AWS costs are growing month over month&lt;/li&gt;
&lt;li&gt;Provisioned capacity &amp;amp; use – this will help you identify cloud waste&lt;/li&gt;
&lt;li&gt;Amazon EC2 unit and instance expenses – EC2 typically accounts for a greater portion of your expenses than other services&lt;/li&gt;
&lt;li&gt;Expenses for unused resources – this should decrease once you have visibility into unused resources &lt;/li&gt;
&lt;li&gt;Data retrieval costs – you should be able to identify how much of your object storage charges come from data retrieval&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can read in detail about the &lt;a href="https://microtica.com/blog/9-kpis-for-measuring-success-with-aws-savings/?utm_source=devto&amp;amp;utm_medium=referral_link&amp;amp;utm_campaign=cost_optimization_pillar"&gt;9 KPIs for measuring success with AWS savings here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Establish continuous cost control by tracking these metrics over time and recognizing patterns for improvement.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Key takeaway:&lt;/strong&gt; Implement tools and dashboards to be able to monitor your performance for your defined metrics. Create a process to review the results against your defined goals and improve your strategy.      &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;In the next article, we'll go through some of the most common AWS pricing plans and break them down to provide a better understanding of AWS costs.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>devops</category>
      <category>aws</category>
      <category>management</category>
    </item>
    <item>
      <title>Tips for Kubernetes Cost Optimization</title>
      <dc:creator>Sara Miteva</dc:creator>
      <pubDate>Wed, 17 Mar 2021 14:32:30 +0000</pubDate>
      <link>https://dev.to/microtica/tips-for-kubernetes-cost-optimization-25ih</link>
      <guid>https://dev.to/microtica/tips-for-kubernetes-cost-optimization-25ih</guid>
      <description>&lt;p&gt;Kubernetes is ruling the container market. According to a &lt;a href="https://www.cncf.io/wp-content/uploads/2020/11/CNCF_Survey_Report_2020.pdf"&gt;CNCF survey&lt;/a&gt;, &lt;strong&gt;the use of Kubernetes in production in 2020 was 93%,&lt;/strong&gt; up from 78% in 2019. Moreover, the survey reveals that the use of containers in production in 2020 was 92%. This figure is up 300% from CNCF’s first survey in 2016. &lt;/p&gt;

&lt;p&gt;Due to the adoption of &lt;a href="https://microtica.com/blog/deploy-your-first-microservice-on-kubernetes-in-10-mins/?utm_source=devto&amp;amp;utm_medium=referral_link&amp;amp;utm_campaign=kubernetes_costs"&gt;Kubernetes by DevOps teams&lt;/a&gt; and the open source community’s encouragement, this figure could grow even more; even if it stays at present levels, it is still a significant market share. But even though Kubernetes makes a lot of things easier, challenges will always appear, as the survey confirms. Namely, the problems listed include networking, storage, tracking, monitoring, a lack of training, and, of course, cost management.&lt;/p&gt;

&lt;p&gt;Running Kubernetes can be very costly, especially if done inefficiently. When businesses first try to incorporate Kubernetes in their organizations, they usually use the same architecture and setup that performed well in initial research experiments. However, this setup is often unoptimized, and companies don’t think about expenses right away. Considering costs from the start could avoid a lot of unnecessary spending and encourage good habits from the beginning.&lt;/p&gt;

&lt;blockquote&gt;
&lt;h1&gt;
  
  
  Check out our comprehensive guide on creating an &lt;a href="https://microtica.com/aws-cost-optimization/?utm_source=devto&amp;amp;utm_medium=referral_link&amp;amp;utm_campaign=cost_optimization_pillar"&gt;AWS cost optimization strategy&lt;/a&gt;.
&lt;/h1&gt;
&lt;/blockquote&gt;

&lt;p&gt;In this article, we’ll go over several methods for controlling and lowering Kubernetes costs. Moreover, as &lt;a href="https://www.stackrox.com/kubernetes-adoption-security-and-market-share-for-containers/"&gt;Amazon EKS is the most common container management approach after self-managed Kubernetes&lt;/a&gt;, we’ll offer more actionable advice on Kubernetes cost optimization on AWS.&lt;/p&gt;

&lt;h1&gt;
  
  
  Kubernetes cost monitoring
&lt;/h1&gt;

&lt;p&gt;This is the most logical first step toward managing your Kubernetes costs more efficiently. Monitoring should show you how you’re spending your money on Kubernetes. More importantly, it should help you identify saving opportunities.&lt;/p&gt;

&lt;p&gt;Cloud vendors offer billing summaries that provide information about what you’re paying for. However, they usually include only a simple overview that is of limited use for multi-tenant Kubernetes clusters, and it is unavailable in private clouds. As a consequence, it’s common to &lt;strong&gt;use external software to monitor Kubernetes consumption&lt;/strong&gt;. Prometheus, Kubecost, &lt;a href="https://microtica.com/aws-cost/?utm_source=devto&amp;amp;utm_medium=referral_link&amp;amp;utm_campaign=kubernetes_costs"&gt;Microtica&lt;/a&gt;, and Replex are some useful tools in this field.&lt;/p&gt;

&lt;p&gt;Choose the tools you’ll use and how you’re going to monitor your Kubernetes costs. Then, start implementing more concrete actions for Kubernetes cost optimization. &lt;/p&gt;

&lt;h1&gt;
  
  
  Limiting resources
&lt;/h1&gt;

&lt;p&gt;Effective resource constraints guarantee that no application or operator of the Kubernetes system uses too much processing power. As a result, they &lt;strong&gt;protect you from unwelcome shocks such as unexpected billing changes&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A container can’t use more than the resource limit you set. If you set the memory limit to, for example, 4GB for a particular container, the kubelet (and the container runtime) enforce that cap, prohibiting the container from exceeding it. When a process in the container attempts to use more memory than is permitted, the system kernel terminates the process with an out-of-memory (OOM) error.&lt;/p&gt;

&lt;p&gt;Limits can be enforced in two ways. The first is &lt;strong&gt;reactively&lt;/strong&gt;, when the system detects a violation. The second is by &lt;strong&gt;enforcement&lt;/strong&gt;, meaning the system never allows the container to go over the limit. Different runtimes can implement the same constraints in different ways.&lt;/p&gt;

&lt;p&gt;Limiting resources is crucial, especially if many of your developers have direct access to Kubernetes. Limits ensure that &lt;strong&gt;available resources are shared fairly&lt;/strong&gt;, reducing the overall cluster size. Without limits, one user could consume all the capacity, preventing others from working and driving up the need for computational resources overall.&lt;/p&gt;

&lt;p&gt;However, be careful to limit your resources with balance. Engineers and software cannot function properly if resource limits are too low, while limits that are too high are often worthless. Some Kubernetes cost optimization tools, like Prometheus and Kubecost, can help you find the right balance for your resources. &lt;/p&gt;

&lt;p&gt;To find out more about limiting resources for containers, check &lt;a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/"&gt;this page of the Kubernetes documentation&lt;/a&gt;. &lt;/p&gt;
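&lt;p&gt;The request/limit relationship described above can be sketched as a small validation step. The helper below is illustrative only, not part of the Kubernetes API; it just checks that no requested resource exceeds its limit.&lt;/p&gt;

```python
# Sketch: a minimal check that a container's resource requests stay within
# its limits. The helper and example values are illustrative, not the
# Kubernetes API itself.

def within_limits(requests, limits):
    """True if every requested resource is no larger than its limit."""
    return all(requests[r] == min(requests[r], limits.get(r, requests[r]))
               for r in requests)

container = {
    "requests": {"memory_mib": 2048, "cpu_millicores": 250},
    "limits":   {"memory_mib": 4096, "cpu_millicores": 500},
}

ok = within_limits(container["requests"], container["limits"])
# ok is True: the 2048 MiB request fits under the 4096 MiB (4 GB) limit
```

&lt;p&gt;In a real cluster, the same shape appears under &lt;em&gt;resources.requests&lt;/em&gt; and &lt;em&gt;resources.limits&lt;/em&gt; in a pod spec, and the kubelet does the enforcing for you.&lt;/p&gt;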

&lt;h1&gt;
  
  
  Autoscaling
&lt;/h1&gt;

&lt;p&gt;Autoscaling means paying only for what you need. That’s why you have to adjust the size of your clusters to your specific needs. You can enable Kubernetes autoscaling to &lt;strong&gt;adapt to quick variations&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Horizontal and vertical autoscaling are the two types of autoscaling available. In a nutshell, horizontal autoscaling adds and removes pods depending on whether the load is above or below a specified level. Vertical autoscaling, in contrast, adjusts the resources of individual pods.&lt;/p&gt;
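&lt;p&gt;For the horizontal case, Kubernetes’ HorizontalPodAutoscaler computes the target pod count as desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric). Here is that rule as plain arithmetic:&lt;/p&gt;

```python
# Sketch: the replica-count rule documented for Kubernetes'
# HorizontalPodAutoscaler, reproduced as plain arithmetic for illustration.
import math

def desired_replicas(current_replicas, current_metric, target_metric):
    """How many pods the horizontal autoscaler would ask for."""
    return math.ceil(current_replicas * current_metric / target_metric)

# 4 pods at 90% average CPU with a 60% target scale out to 6 pods.
scale_out = desired_replicas(4, 90, 60)
# 6 pods at 20% average CPU with a 60% target scale back in to 2 pods.
scale_in = desired_replicas(6, 20, 60)
```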

&lt;p&gt;Both methods of autoscaling are useful for dynamically adapting the usable computational capacity to your real needs. This approach, though, is not ideal for every use case. For example, a workload that keeps requesting computational resources is not automatically scaled down, even if it is idle. &lt;/p&gt;

&lt;h1&gt;
  
  
  Choose the right AWS instance
&lt;/h1&gt;

&lt;p&gt;AWS Kubernetes costs are directly impacted by the AWS instances developers use to run their Kubernetes clusters. Instances come in a number of different forms, with varying combinations of memory and compute resources. Kubernetes pods vary the same way, with different resource allocations. The key to keeping AWS Kubernetes costs in check is to make sure &lt;strong&gt;pods stack effectively on your AWS instances&lt;/strong&gt;. The AWS instance should match the size of your pods. &lt;/p&gt;

&lt;p&gt;The scale, number, and historical resource utilization trends of pods all play a role in deciding which AWS instance to use. Applications may have different storage or CPU requirements, which affects the type of instance to use.&lt;/p&gt;

&lt;p&gt;Ensuring that the Kubernetes pods’ resource consumption correlates to the overall CPU and memory available on the AWS instances they use is critical for optimizing resource use and lowering AWS costs.&lt;/p&gt;

&lt;p&gt;Check the Amazon EC2 instance types &lt;a href="https://aws.amazon.com/ec2/instance-types/"&gt;here&lt;/a&gt; and choose the one that suits your needs best. &lt;/p&gt;

&lt;h1&gt;
  
  
  Use spot instances
&lt;/h1&gt;

&lt;p&gt;AWS instances are available in several billing profiles: on-demand, reserved, and spot instances. On-demand instances are the most costly but offer the greatest flexibility. Spot instances have the lowest price, but they can be terminated with a two-minute warning. You can also get reserved instances for a set period of time to save costs. As a result, the choice of billing profile has a direct effect on the cost of operating Kubernetes on AWS. &lt;/p&gt;

&lt;p&gt;You can utilize &lt;a href="https://aws.amazon.com/ec2/spot/?cards.sort-by=item.additionalFields.startDateTime&amp;amp;cards.sort-order=asc"&gt;spot instances&lt;/a&gt; for workloads that you don’t permanently need and that can handle a lot of interruptions. AWS claims that spot instances will help you save up to 90% on your EC2 on-demand instance prices.  &lt;/p&gt;

&lt;p&gt;If spot instances aren’t an option because your application must run without interruption, you may get a discount by committing to use the services for a fixed period of time. A one- or three-year usage term brings a substantial discount: according to AWS, between 40% and 60%.&lt;/p&gt;

&lt;h1&gt;
  
  
  Set sleeping schedules
&lt;/h1&gt;

&lt;p&gt;No matter if you run your Kubernetes clusters on on-demand, reserved, or spot instances, terminating underutilized clusters is crucial for cost management. AWS bills EC2 for the period of time instances are provisioned, so underutilized instances cost you the full expense of a running instance while delivering far less value than they could.&lt;/p&gt;

&lt;p&gt;To put it simply, if a development team uses a cloud-based Kubernetes environment, they typically only use it during business hours. If they work 40 hours a week but the environment keeps running the rest of the time, they are paying for the remaining 128 hours when no one is using it. This, of course, won’t be the case in every team, especially with flexible working hours, but turning off the environment when no one is working can significantly enhance Kubernetes cost optimization. &lt;/p&gt;
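&lt;p&gt;The arithmetic behind this example is easy to sketch. The hourly rate used below is a made-up placeholder:&lt;/p&gt;

```python
# Sketch: what an always-on development environment wastes versus one that
# sleeps outside business hours. Figures match the 40-hour example above;
# the hourly cost is a made-up placeholder.

HOURS_PER_WEEK = 168

def idle_share(active_hours_per_week):
    """Fraction of the week the environment runs unused."""
    return (HOURS_PER_WEEK - active_hours_per_week) / HOURS_PER_WEEK

def weekly_waste(active_hours_per_week, hourly_cost):
    """Money spent on the idle hours each week."""
    return (HOURS_PER_WEEK - active_hours_per_week) * hourly_cost

# A 40-hour work week leaves the cluster idle 128 of 168 hours (about 76%).
share = idle_share(40)
waste = weekly_waste(40, 0.50)   # $0.50/hour placeholder rate
```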

&lt;p&gt;Developers can set this up by automating a sleeping schedule and &lt;strong&gt;waking up the environments only when they need them&lt;/strong&gt;. Setting up this schedule means that &lt;strong&gt;the system will automatically scale down unused resources&lt;/strong&gt;. This guarantees that the environment’s condition is saved. Moreover, the environment will “wake up” easily and automatically when the engineer needs it again, meaning that there is no disruption in the workflow.&lt;/p&gt;

&lt;h1&gt;
  
  
  Practice regular Kubernetes cleanup
&lt;/h1&gt;

&lt;p&gt;If you give engineers full access to build namespaces on demand or use Kubernetes for CI/CD, you can end up with a lot of unused objects or clusters that still cost you money. Even a sleep mode that decreases computational resources only covers temporarily inactive resources and still retains storage and configuration. That’s why, when you notice that some of your resources have been &lt;strong&gt;inactive for a very long time&lt;/strong&gt;, removing them is the smart thing to do. &lt;/p&gt;

&lt;h1&gt;
  
  
  Right-size your Kubernetes cluster
&lt;/h1&gt;

&lt;p&gt;Managing a Kubernetes cluster is different for each case. There are various methods for correctly sizing your cluster, and it is important to develop your application for consistency and durability. As a programmer, you’ll frequently need to consider the specifications for the applications you’ll be running on your cluster before building it.&lt;/p&gt;

&lt;p&gt;Right-sizing your nodes is very important when designing apps for scale. A large number of small nodes and a small number of large nodes are two very different things. That’s why the best approach would be to &lt;strong&gt;find the right balance between these two ends&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;However, different requirements of your apps require different numbers and sizes of nodes. &lt;a href="https://betterprogramming.pub/tips-for-rightsizing-your-kubernetes-cluster-e0a8f1093d8d"&gt;Check this article&lt;/a&gt; to find out what size and number you need for various app cases.&lt;/p&gt;

&lt;h1&gt;
  
  
  Tag resources
&lt;/h1&gt;

&lt;p&gt;In any environment, whether cloud, on-premises, or containers, tagging resources is a smart idea. Services are bound to go unnoticed in enterprise Kubernetes environments with numerous test, staging, and development environments. These services become a chronic burden on your AWS bill, even though they aren’t used. Companies should use tagging to &lt;strong&gt;guarantee that all services are controlled&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;AWS provides &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/eks-using-tags.html"&gt;a robust tagging scheme&lt;/a&gt; that you can use to mark services belonging to Kubernetes. You may use these tags to stay on top of resources, resource holders, and resource usage. Effective tagging allows you to easily classify and eliminate unused services. You’ll be able to assign costs and view expense breakdowns for various services once these tags are enabled in the AWS Billing dashboard.&lt;/p&gt;
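&lt;p&gt;The kind of per-team breakdown tagging enables can be sketched as a simple roll-up. Resource names, tags, and amounts below are invented for illustration:&lt;/p&gt;

```python
# Sketch: rolling up per-resource costs by a team tag, the kind of breakdown
# the AWS Billing dashboard gives you once cost allocation tags are enabled.
# Resource names, tags, and amounts are invented.

def cost_by_tag(resources, tag_key):
    """Sum costs grouped by the value of one tag."""
    totals = {}
    for r in resources:
        tag_value = r["tags"].get(tag_key, "untagged")
        totals[tag_value] = totals.get(tag_value, 0.0) + r["cost"]
    return totals

resources = [
    {"name": "eks-node-1", "cost": 120.0, "tags": {"team": "platform"}},
    {"name": "eks-node-2", "cost": 95.0,  "tags": {"team": "platform"}},
    {"name": "staging-db", "cost": 60.0,  "tags": {"team": "payments"}},
    {"name": "orphan-volume", "cost": 12.5, "tags": {}},
]

breakdown = cost_by_tag(resources, "team")
# breakdown: {"platform": 215.0, "payments": 60.0, "untagged": 12.5}
```

&lt;p&gt;An "untagged" bucket like the one above is often where forgotten, removable resources hide.&lt;/p&gt;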

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;The first step in Kubernetes cost optimization is to create an outline of your costs and begin monitoring them. Then, to avoid unnecessary computational resource usage, you can set limits, which makes the costs more manageable. &lt;/p&gt;

&lt;p&gt;Determining the right size for your resources is critical for cost reduction, and autoscaling will also help. If you use AWS, check its less costly options, like spot instances. Additional steps to remove idle resources include an automated sleep schedule and cleaning up unused Kubernetes resources. Finally, adjust pod size and implement resource tagging for even better Kubernetes cost optimization. &lt;/p&gt;

&lt;p&gt;Incorporating these tips into your processes will result in a &lt;strong&gt;cost-optimized Kubernetes system&lt;/strong&gt;. This will free up money for more crucial business operations and product improvements. &lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>cloud</category>
      <category>management</category>
    </item>
    <item>
      <title>KPIs for Measuring Success with AWS Savings</title>
      <dc:creator>Sara Miteva</dc:creator>
      <pubDate>Tue, 09 Mar 2021 07:36:21 +0000</pubDate>
      <link>https://dev.to/microtica/kpis-for-measuring-success-with-aws-savings-4d71</link>
      <guid>https://dev.to/microtica/kpis-for-measuring-success-with-aws-savings-4d71</guid>
      <description>&lt;p&gt;Gathering and reviewing data to obtain useful insights into your organization and assess the performance of your processes is essential. When it comes to AWS savings, it’s necessary to calculate your AWS expense and utilization performance metrics. You should do this just like you measure your operational activities. As &lt;a href="https://www.flexera.com/blog/cloud/aws-costs-how-much-are-you-wasting/"&gt;35% of the cloud spend is wasted&lt;/a&gt;, organizations need to find more effective ways to make the most out of their AWS savings. &lt;/p&gt;

&lt;p&gt;In the previous articles of this series, we discussed &lt;a href="https://microtica.com/blog/7-challenges-with-aws-costs/?utm_source=devto&amp;amp;utm_medium=referral_link&amp;amp;utm_campaign=aws_kpis"&gt;the most common challenges with AWS costs&lt;/a&gt;, listed &lt;a href="https://microtica.com/blog/3-compelling-reasons-why-you-need-aws-cost-optimization/?utm_source=devto&amp;amp;utm_medium=referral_link&amp;amp;utm_campaign=aws_kpis"&gt;three reasons why organizations need AWS cost optimization&lt;/a&gt;, and made a short introduction to FinOps. In this article, we suggest some cost management metrics that can help you track AWS savings more effectively. Moreover, you can use them as a tool to predict future performance and make cutting decisions. &lt;/p&gt;

&lt;blockquote&gt;
&lt;h1&gt;
  
  
  Check out our comprehensive guide on creating an &lt;a href="https://microtica.com/aws-cost-optimization/?utm_source=devto&amp;amp;utm_medium=referral_link&amp;amp;utm_campaign=cost_optimization_pillar"&gt;AWS cost optimization strategy&lt;/a&gt;.
&lt;/h1&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  Monthly growth
&lt;/h1&gt;

&lt;p&gt;Observing your monthly growth helps you see how your total AWS costs are rising. To account for cyclical trends in your company, we also suggest reviewing these statistics over several periods. For example, aside from calculating one-month growth, also calculate your growth over a period of six months. This will help you understand the trends your company follows and forecast future growth.  &lt;/p&gt;
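&lt;p&gt;The two calculations suggested above can be sketched as follows, with made-up monthly bill figures:&lt;/p&gt;

```python
# Sketch: growth over one month and over a six-month window.
# The monthly bill figures are made up.

def growth_rate(previous, current):
    """Fractional change between two billing periods."""
    return (current - previous) / previous

bills = [1000.0, 1050.0, 1100.0, 1080.0, 1150.0, 1200.0, 1300.0]

one_month = growth_rate(bills[-2], bills[-1])   # last month vs. the prior one
six_month = growth_rate(bills[0], bills[-1])    # vs. six months earlier
```

&lt;p&gt;Comparing the short window with the long one separates a one-off spike from a sustained upward trend.&lt;/p&gt;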

&lt;h1&gt;
  
  
  Provisioned capacity &amp;amp; use
&lt;/h1&gt;

&lt;p&gt;Over-provisioning, distributing services with more power than necessary, is one of the principal sources of cloud waste. Many teams carry the habit over from the days of on-site hosting, when extra capacity did not matter as much. In the cloud, however, you pay for the capacity you provision, not for what you use. If your provisioned capacity is dramatically higher than your actual usage, that is a warning that the environment is inefficient and that you should take remedial action to rightsize and reduce your cloud expenses.&lt;/p&gt;
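&lt;p&gt;A quick way to quantify this kind of waste is the gap between provisioned capacity and actual use. The numbers below are illustrative:&lt;/p&gt;

```python
# Sketch: cloud waste as the gap between provisioned capacity and actual use.
# The vCPU figures are illustrative.

def waste_ratio(provisioned, used):
    """Fraction of paid-for capacity that goes unused."""
    return (provisioned - used) / provisioned

# Provisioning 64 vCPUs while workloads average 20 wastes about 69% of the spend.
ratio = waste_ratio(64, 20)
```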

&lt;h1&gt;
  
  
  Amazon EC2 unit expenses
&lt;/h1&gt;

&lt;p&gt;Amazon EC2 powers many AWS workloads, which means EC2 will account for a greater portion of your expenses compared to other services. As developers usually use EC2 across a variety of workloads, we suggest monitoring costs at shorter intervals, on a daily or even hourly basis. These reports can also help you manage pricing models more effectively. Moreover, you can understand how much EC2 a single customer costs you, which is a very useful metric. &lt;/p&gt;

&lt;h1&gt;
  
  
  Amazon EC2 instance expenses
&lt;/h1&gt;

&lt;p&gt;New instance generations support increased processing power and greater cost savings. For this purpose, it can be very effective to identify the EC2 workloads that run on older generations of instances and devise a roadmap to migrate them to newer generations. Setting a concrete goal and taking targeted actions to achieve it can be very useful in optimizing your EC2 usage efficiency and reducing your overall AWS costs. &lt;/p&gt;

&lt;h1&gt;
  
  
  Amazon EC2 usage coverage
&lt;/h1&gt;

&lt;p&gt;There is a range of pricing models you can use to gain greater savings, even as you scale up your AWS consumption. You can select between Spot Instances, Savings Plans, and Reserved Instances, which cover a number of different use cases for your EC2 usage. Setting a threshold for how much of your instance fleet each model should cover is useful in each case. The goal is to choose the most optimal model for your particular situation. &lt;/p&gt;

&lt;h1&gt;
  
  
  S3 expense per storage class
&lt;/h1&gt;

&lt;p&gt;Another service you are probably using is Amazon S3, which also plays a big role in AWS savings. This service has a lot of built-in features to help you optimize your costs. Similar to EC2, you should evaluate the different storage classes your company is using, then customize policies to make an impact on them. Use lifecycle policies to automate your cost savings plan. &lt;/p&gt;

&lt;h1&gt;
  
  
  Expenses for unused resources
&lt;/h1&gt;

&lt;p&gt;Every organization has unused resources it is paying for, such as recovery snapshots. The amount spent on them is a good indicator of how well the organization is managing finances in the cloud. There are likely hundreds, if not thousands, of other unused resources in most organizations’ cloud environments, ranging from idle load balancers to unattached block storage volumes. However, in many cases, these are not easy to identify. It is therefore a worthwhile investment to implement a cloud management platform that brings granular visibility into your entire cloud ecosystem and can actively monitor and identify unused resources that can be terminated.&lt;/p&gt;

&lt;h1&gt;
  
  
  Data retrieval costs
&lt;/h1&gt;

&lt;p&gt;Many companies know that keeping infrequently used data in cold storage tiers saves a substantial amount of money. But if that data is retrieved often enough, data retrieval fees can add up until they exceed the amount saved by moving the data to a cold storage tier in the first place.&lt;/p&gt;

&lt;p&gt;The S3 Intelligent-Tiering storage class from AWS partially solves this problem. It can also help you get an overview of how much of your object storage spend is exposed to data retrieval charges. However, it doesn’t cover One Zone or archived storage. &lt;/p&gt;
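
&lt;p&gt;The break-even logic reduces to simple arithmetic: compare the monthly storage savings against the expected retrieval fees. The per-GB prices below are illustrative placeholders, not current S3 pricing, which varies by region and tier.&lt;/p&gt;

```python
def cold_storage_net_savings(size_gb, retrieved_gb_per_month,
                             standard_price=0.023, ia_price=0.0125,
                             retrieval_price=0.01):
    """Monthly net savings (USD) from moving data to an infrequent-access tier.

    Prices are illustrative per-GB-month (storage) and per-GB (retrieval)
    figures. A negative result means retrieval fees outweigh the savings."""
    storage_savings = size_gb * (standard_price - ia_price)
    retrieval_cost = retrieved_gb_per_month * retrieval_price
    return storage_savings - retrieval_cost
```

&lt;p&gt;For example, with these placeholder prices, 1 TB that is rarely touched saves about $10.50 a month, while the same 1 TB retrieved twice over per month ends up costing more than it saves.&lt;/p&gt;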

&lt;h1&gt;
  
  
  Number of cost spike notifications
&lt;/h1&gt;

&lt;p&gt;Almost all billing dashboards let you set a budget limit, so you’ll get a notification each time the budget is overshot or a similar cost increase occurs. &lt;/p&gt;

&lt;p&gt;Sudden cost spikes can be a sign of malfunctioning assets or network breaches, which is why you need to know about them in real time. Look for cloud management systems that offer flexibility when designing rules and that immediately warn stakeholders when policies are broken.&lt;/p&gt;
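
&lt;p&gt;On AWS, such a notification can be configured programmatically through AWS Budgets. The sketch below assumes &lt;code&gt;boto3&lt;/code&gt;; the budget amount, the 80% threshold, and the subscriber address are illustrative placeholders.&lt;/p&gt;

```python
# Illustrative AWS Budgets configuration: alert when actual spend
# crosses 80% of a $5,000 monthly budget.
MONTHLY_BUDGET = {
    "BudgetName": "monthly-cloud-budget",  # hypothetical name
    "BudgetLimit": {"Amount": "5000", "Unit": "USD"},
    "TimeUnit": "MONTHLY",
    "BudgetType": "COST",
}

SPIKE_ALERT = [
    {
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 80.0,  # percent of the budget
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [
            # hypothetical address -- replace with a real stakeholder list
            {"SubscriptionType": "EMAIL", "Address": "finops@example.com"}
        ],
    }
]


def create_budget_alert(account_id: str) -> None:
    """Register the budget and its notification (requires AWS credentials)."""
    import boto3

    boto3.client("budgets").create_budget(
        AccountId=account_id,
        Budget=MONTHLY_BUDGET,
        NotificationsWithSubscribers=SPIKE_ALERT,
    )
```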




&lt;p&gt;After you’ve created your own dashboard with the specific metrics your business needs, you should work more deeply on evaluating them. For example, metrics about spending and usage can be useful in reporting to the entire organization about the costs of your department. Metrics that refer to value can be compared to your market value drivers to produce KPIs that you can track based on your company goals. It all depends on the specific objectives you’re aiming to achieve. &lt;/p&gt;

</description>
      <category>cloudnative</category>
      <category>devops</category>
      <category>management</category>
      <category>aws</category>
    </item>
    <item>
      <title>What's FinOps?</title>
      <dc:creator>Sara Miteva</dc:creator>
      <pubDate>Wed, 03 Mar 2021 09:14:52 +0000</pubDate>
      <link>https://dev.to/microtica/what-s-finops-4lea</link>
      <guid>https://dev.to/microtica/what-s-finops-4lea</guid>
      <description>&lt;p&gt;As a part of our cost optimization series, we already talked about &lt;a href="https://microtica.com/blog/7-challenges-with-aws-costs/?utm_source=devto&amp;amp;utm_medium=referral_link&amp;amp;utm_campaign=finops"&gt;the most common AWS cost optimization challenges&lt;/a&gt; and the &lt;a href="https://microtica.com/blog/3-compelling-reasons-why-you-need-aws-cost-optimization/?utm_source=devto&amp;amp;utm_medium=referral_link&amp;amp;utm_campaign=finops"&gt;reasons why every company needs to solve this&lt;/a&gt;. In this article, we’re talking about FinOps as a way to optimize cloud costs.&lt;/p&gt;

&lt;p&gt;To solve their cloud cost optimization problems, some companies develop FinOps practices. FinOps refers to Cloud Financial Management. It is the process of adding financial transparency to the cloud’s variable expenditure model. The goal is to empower teams to balance speed, expense, and quality.&lt;/p&gt;

&lt;blockquote&gt;
&lt;h1&gt;
  
  
  Check out our comprehensive guide on creating an &lt;a href="https://microtica.com/aws-cost-optimization/?utm_source=devto&amp;amp;utm_medium=referral_link&amp;amp;utm_campaign=cost_optimization_pillar"&gt;AWS cost optimization strategy&lt;/a&gt;.
&lt;/h1&gt;
&lt;/blockquote&gt;

&lt;p&gt;FinOps is, at its heart, a cultural shift. Cross-functional teams operate together to facilitate quicker execution, while also gaining greater financial and organizational control.&lt;/p&gt;

&lt;p&gt;FinOps allows all operating teams to access real-time data they need to impact their expenditure and make wise choices that eventually contribute to successful cloud cost optimization without affecting the performance, speed, and quality of the final product. &lt;/p&gt;

&lt;p&gt;FinOps is all about eliminating obstacles. It enables innovation teams to produce better features, software, and migrations more rapidly. It also allows a cross-functional discussion about when and where to spend. An organization can sometimes plan to cut back on spending, while other times it will decide to spend more. Yet teams have to know why they make these choices.&lt;/p&gt;

&lt;h1&gt;
  
  
  The FinOps process
&lt;/h1&gt;

&lt;p&gt;According to the FinOps Foundation, the FinOps process has three phases: inform, optimize, and operate. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_-4Rwi0T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://mk0microtica2di3k2co.kinstacdn.com/wp-content/uploads/2021/03/finops-333-01-1536x804.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_-4Rwi0T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://mk0microtica2di3k2co.kinstacdn.com/wp-content/uploads/2021/03/finops-333-01-1536x804.png" alt="the finops process" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;inform&lt;/strong&gt; phase is when the company gains visibility over the allocation of its resources. During this phase, the focus is on where our resources have been allocated, what the state of our budget is, and what kind of future awaits with this setup. The on-demand, dynamic nature of the cloud, combined with customized pricing and discounts, makes precise, real-time visibility both necessary and possible for wise decisions. Correctly attributing cloud spending based on tags, accounts, or company mappings enables detailed chargeback and showback. Other stakeholders also want to confirm that the focus remains on ROI, while staying within budget and predicting spending precisely enough to eliminate uncertainty.&lt;/p&gt;
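
&lt;p&gt;Tag-based showback can be sketched as a simple aggregation over tagged resources and their costs. The &lt;code&gt;team&lt;/code&gt; tag key and the resource dictionary shape below are assumptions for illustration, not a fixed AWS format.&lt;/p&gt;

```python
def showback_by_tag(resources, tag_key="team"):
    """Group monthly costs by a cost-allocation tag for showback reports.

    `resources` is a list of dicts with 'tags' and 'monthly_cost' keys
    (an assumed shape). Untagged resources are bucketed separately, which
    also surfaces gaps in the tagging policy."""
    totals = {}
    for r in resources:
        owner = r.get("tags", {}).get(tag_key, "untagged")
        totals[owner] = totals.get(owner, 0.0) + r["monthly_cost"]
    return totals
```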

&lt;p&gt;The second phase is &lt;strong&gt;optimize&lt;/strong&gt;, when the team has to identify cost optimization opportunities and act accordingly. The goal is to spend resources more efficiently, avoiding waste. To help teams optimize, cloud vendors offer multiple tools. On-demand capacity is the most expensive, so cloud providers offer discounts in exchange for commitments, which usually require careful usage estimates and advance reservation planning. Furthermore, departments and companies can optimize their environments by finding inefficient resource usage and modifying or automating it away.&lt;/p&gt;

&lt;p&gt;The third and final phase is the &lt;strong&gt;operate&lt;/strong&gt; phase. Here, organizations act according to their set goals and track their progress. The team should follow the business goals together with the speed, efficiency, and expense of its processes. A company will make significant progress in its cost optimization activities only if it develops an efficient FinOps culture that keeps all stakeholders on track with everything that happens. &lt;/p&gt;

&lt;p&gt;In the next part, we’ll elaborate on what a FinOps culture actually means. &lt;/p&gt;

&lt;h1&gt;
  
  
  The FinOps culture
&lt;/h1&gt;

&lt;p&gt;To drive acceptance, FinOps needs some cultural engineering. You may be the sole authority on FinOps, but that only goes so far. To make it more transformative, you have to get other members of both the Dev and DevOps teams engaged in FinOps, and you need to make it simple for them to implement.&lt;/p&gt;

&lt;p&gt;FinOps is, at its heart, a cultural activity. The most powerful way for teams to control their cloud expenses is through this operations strategy. Teams can leverage FinOps to execute more efficiently while maintaining financial and organizational power.&lt;/p&gt;

&lt;p&gt;In combination with the transition to flexible cloud investment, distributed decision-making enables technology teams to work successfully with finance and business teams to make smart decisions that accelerate continuous optimization. FinOps processes enable these teams to work at high speed while improving the cost efficiency of the cloud system. This shift empowers teams at the edges of the organization and allows team members to engage in increasing productivity, maximizing usage, and reducing wasted investment in any area of the organization.&lt;/p&gt;

&lt;h1&gt;
  
  
  The role of a FinOps lead
&lt;/h1&gt;

&lt;p&gt;To implement FinOps practices more effectively, many teams decide to hire a FinOps lead who will take care of everything related. Here’s what this person would do: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cost control: Consider where expenditures derive from.&lt;/li&gt;
&lt;li&gt;Output tuning: Invest and redistribute money where you really need to.&lt;/li&gt;
&lt;li&gt;Timely choices related to cost management: You can anticipate and make decisions with real-time knowledge.&lt;/li&gt;
&lt;li&gt;Forecast, schedule, and secure resources: You can predict the resource needs for the future, find discounts or allocate resources from other places using the information regarding cloud data consumption.&lt;/li&gt;
&lt;li&gt;Align IT and finance teams: Teamwork guarantees the correct budget, optimizes expenses, and reduces cash loss.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s really difficult, though, to find one person who has all the skills required. Where the budget permits, the safest choice is to dedicate a multidisciplinary team of finance and IT individuals. Another solution is to hire employees with a very strong financial background and support them with the departments already established in the company. &lt;a href="https://www.infoq.com/articles/every-devops-team-needs-finops-lead/"&gt;How PerimeterX implemented FinOps in their organization&lt;/a&gt; is a great example of this. &lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;A company’s dedication to a FinOps strategy is long-term. The process does not always go as expected, particularly for new enterprises that are still working out the commercial side of the cloud. But any organization can transition to FinOps effectively if it follows the right guiding principles and has the right attitude, the right employees, and the right cloud optimization software.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>management</category>
      <category>cloudnative</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Why AWS Cost Optimization?</title>
      <dc:creator>Sara Miteva</dc:creator>
      <pubDate>Wed, 17 Feb 2021 09:19:14 +0000</pubDate>
      <link>https://dev.to/microtica/why-aws-cost-optimization-4op9</link>
      <guid>https://dev.to/microtica/why-aws-cost-optimization-4op9</guid>
      <description>&lt;p&gt;&lt;em&gt;This article is the second in our series on AWS cost optimization. In this series, we’ll introduce the challenges with AWS costs. We’ll also offer actionable recommendations on how to solve them and perform efficient AWS cost optimization.&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;h1&gt;
  
  
  Check out our comprehensive guide on creating an &lt;a href="https://microtica.com/aws-cost-optimization/?utm_source=devto&amp;amp;utm_medium=referral_link&amp;amp;utm_campaign=cost_optimization_pillar"&gt;AWS cost optimization strategy&lt;/a&gt;.
&lt;/h1&gt;
&lt;/blockquote&gt;

&lt;p&gt;Cloud cost optimization can help a company maximize its business value. A cost-efficient company has &lt;strong&gt;financial stability and success&lt;/strong&gt;, and with that, great potential to &lt;strong&gt;accelerate its business growth&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;The term cost optimization is commonly used in the same context as the term &lt;strong&gt;cost reduction&lt;/strong&gt;. The main difference between them is that cost optimization nurtures a culture where the company is aware of its costs at all times and treats them with ongoing dedication and responsibility. &lt;/p&gt;

&lt;p&gt;With the pandemic and the current economic situation, companies have to make cloud cost savings their number one priority. Here are the top three reasons why you need AWS cost optimization:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kypCq3BI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh5.googleusercontent.com/JNhn7jjIllOpwXCk354BjKDqoi_KM7OoyCvx5OX23ZxN4pBRuv2mNQk44Oe5k2F_5lb2_fK6MUvt-CNnSlVuSQJxrBAzuRem5pgJI2QzZ932n9Rnk4wpT1V_GobgdjX_rCHxfvM" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kypCq3BI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh5.googleusercontent.com/JNhn7jjIllOpwXCk354BjKDqoi_KM7OoyCvx5OX23ZxN4pBRuv2mNQk44Oe5k2F_5lb2_fK6MUvt-CNnSlVuSQJxrBAzuRem5pgJI2QzZ932n9Rnk4wpT1V_GobgdjX_rCHxfvM" alt="top cloud initiatives for 2020" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Remove unused and idle resources
&lt;/h1&gt;

&lt;p&gt;Unused resources are one of the most important reasons why you need AWS cost optimization. Having a cost optimization strategy can help you identify left-over resources. There is always the possibility of someone creating a new instance of a resource and forgetting to remove it afterward. Or, they scale up for testing purposes and then leave everything running. All these unused resources can cause &lt;strong&gt;cloud waste that cannot be identified easily&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;Turning off idle resources when they are not in use can be one of the most effective ways to save on cloud costs. This is mostly applicable to &lt;strong&gt;non-production environments used for development, testing,&lt;/strong&gt; and &lt;strong&gt;staging&lt;/strong&gt;. These environments don’t need to run all the time, since they are mostly used during working hours. Defining custom sleep schedules that stop resources after work hours or on weekends can &lt;a href="https://microtica.com/aws-cost/?utm_source=devto&amp;amp;utm_medium=referral_link&amp;amp;utm_campaign=why_aws_cost_optimization"&gt;make your AWS bill significantly lower&lt;/a&gt;.&lt;/p&gt;
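
&lt;p&gt;The savings from a sleep schedule follow directly from on-demand billing: instances accrue compute cost only while running. The sketch below assumes &lt;code&gt;boto3&lt;/code&gt; and an &lt;code&gt;env&lt;/code&gt; tag on non-production instances; both the tag and the working-hours figures are illustrative.&lt;/p&gt;

```python
def weekly_savings_fraction(hours_per_day=10, days_per_week=5):
    """Fraction of on-demand compute cost saved by running only during
    working hours instead of 24/7 (10h x 5 days is an illustrative schedule)."""
    running = hours_per_day * days_per_week
    return 1 - running / (24 * 7)


def stop_by_tag(tag_value="staging", region="eu-west-1"):
    """Stop running instances tagged env=<tag_value> (requires credentials).

    Run this on a schedule (e.g. a cron job) at the end of the workday."""
    import boto3

    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.describe_instances(Filters=[
        {"Name": "tag:env", "Values": [tag_value]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ])
    ids = [i["InstanceId"] for r in resp["Reservations"] for i in r["Instances"]]
    if ids:
        ec2.stop_instances(InstanceIds=ids)
```

&lt;p&gt;With the illustrative 10-hour, 5-day schedule, roughly 70% of the weekly on-demand hours are never billed.&lt;/p&gt;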

&lt;h1&gt;
  
  
  Plan spending
&lt;/h1&gt;

&lt;p&gt;Being able to pay as you go is one of the many great features of the public cloud. But it also makes it very difficult to be aware of your cloud needs and necessary budget. By looking at your cloud spending history, you can start to &lt;strong&gt;identify patterns of usage&lt;/strong&gt;. Information like &lt;em&gt;which AWS account costs you the most&lt;/em&gt; and &lt;em&gt;which services create the most spending&lt;/em&gt; can give you an idea of how you should allocate your budget. &lt;/p&gt;

&lt;p&gt;With that knowledge, you can make better forecasts of future spending. You can create an action plan and avoid surprises in your AWS bill.  &lt;/p&gt;
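
&lt;p&gt;One way to start identifying such patterns is the Cost Explorer API. The sketch below assumes &lt;code&gt;boto3&lt;/code&gt; and suitable permissions; the date range is illustrative, and the aggregation helper is pure so it works on any response-shaped data.&lt;/p&gt;

```python
def top_services(results_by_time, n=3):
    """Aggregate a Cost Explorer response into per-service totals, largest first."""
    totals = {}
    for period in results_by_time:
        for group in period["Groups"]:
            service = group["Keys"][0]
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            totals[service] = totals.get(service, 0.0) + amount
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]


def fetch_monthly_costs(start="2021-01-01", end="2021-03-01"):
    """Pull per-service monthly costs (requires credentials; dates illustrative)."""
    import boto3

    ce = boto3.client("ce")
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    return resp["ResultsByTime"]
```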

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--10oCvLbG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://mk0microtica2di3k2co.kinstacdn.com/wp-content/uploads/2021/02/pasted-image-0-e1613479069215.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--10oCvLbG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://mk0microtica2di3k2co.kinstacdn.com/wp-content/uploads/2021/02/pasted-image-0-e1613479069215.png" alt="organizational spend on public cloud" width="800" height="469"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;As organizations increase their cloud use, they struggle to manage their cloud costs effectively. This chart from the &lt;a&gt;Flexera 2020 State of the Cloud Report&lt;/a&gt; shows that companies’ public cloud spend goes over budget by an average of 23%.&lt;/p&gt;



&lt;h1&gt;
  
  
  Identify saving opportunities
&lt;/h1&gt;

&lt;p&gt;Staying on top of your spending makes it easier to stay on budget, especially when you visualize it. Monitoring cloud costs can help you &lt;strong&gt;identify opportunities for improvement.&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;Of course, this will not be a one-time activity. Cloud environments are living systems, so they will constantly change. Once you act on saving opportunities like &lt;strong&gt;removing unused resources&lt;/strong&gt;, new ones will appear. This is why it’s best to automate the process. Leveraging automation to scan working environments, get notifications about identified waste, and receive saving recommendations can optimize your overall operations.&lt;/p&gt;

&lt;h1&gt;
  
  
  Final words
&lt;/h1&gt;

&lt;p&gt;As cloud usage continues to accelerate, it’s essential that we are cautious and responsible with our cloud costs. Companies that have implemented cloud optimization strategies see great benefits in many other aspects. Agile businesses &lt;strong&gt;increase revenue, decrease operational risk&lt;/strong&gt; and &lt;strong&gt;improve team productivity&lt;/strong&gt;. Putting in the time and effort now to define a cost optimization plan, set up responsibilities, and implement money-saving tools will have a significant impact on your budget. &lt;/p&gt;

</description>
      <category>cloudnative</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
