Designing Cost-Optimized AWS Architectures: A Comprehensive Guide

In this tutorial, we will delve into the art and science of designing cost-optimized AWS architectures. Cost optimization is a crucial aspect of AWS cloud computing, and this guide will provide a blend of theoretical knowledge and practical exercises to help you create architectures that not only meet your performance requirements but also optimize your AWS bill.

Table of Contents

  1. Introduction to Cost Optimization in AWS
  2. Understanding AWS Pricing Models
  3. Design Principles for Cost Optimization
  4. Architecting for Scalability and Elasticity
  5. Use Case: Cost Optimizing Client's Architecture
  6. Conclusion

1. Introduction to Cost Optimization in AWS

Understanding the importance of cost optimization is the first step towards building cost-effective AWS architectures. We'll introduce the concept and highlight its significance in cloud computing.

2. Understanding AWS Pricing Models

Before diving into cost optimization, it's crucial to understand AWS pricing models. AWS, like most cloud-based services, offers a pay-as-you-go approach that allows users to pay only for the individual services they need, without long-term contracts or complex licensing. Users pay only for the services they consume, and once they stop using them there are no additional costs or termination fees.
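
To make this concrete, here is a minimal sketch (using boto3 and the Cost Explorer API) that pulls the month-to-date spend broken down by service, which is exactly what the pay-as-you-go model bills you for. It assumes credentials are already configured and that Cost Explorer is enabled on the account.

```python
import boto3
from datetime import date

# Cost Explorer reports only what has actually been consumed --
# there is no upfront fee or minimum commitment to account for.
ce = boto3.client("ce")

today = date.today()
response = ce.get_cost_and_usage(
    TimePeriod={
        "Start": today.replace(day=1).isoformat(),  # first day of this month
        "End": today.isoformat(),                   # up to (not including) today
    },
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{service}: ${amount:.2f}")
```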

The image below shows the basic principles of the AWS pricing model: every user is charged only for the resources they consume, so there are no upfront fees and no minimum commitment or long-term contracts required.

AWS pricing model

3. Design Principles for Cost Optimization

Cost Optimization is one of the pillars of the AWS Well-Architected Framework. The design principles of this pillar include the following:

Adopting an efficient consumption model is directly related to the pay-as-you-go pricing model described above: users pay only for the computing resources they consume, and increase or decrease that consumption depending on business requirements.
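
As a hedged sketch of the consumption model in practice, the snippet below stops every running EC2 instance carrying a hypothetical `Environment=dev` tag so that non-production capacity is not billed overnight; a scheduled Lambda function or cron job could run it each evening and start the instances again in the morning.

```python
import boto3

ec2 = boto3.client("ec2")

# Find running instances carrying the (hypothetical) Environment=dev tag.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Environment", "Values": ["dev"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [
    instance["InstanceId"]
    for reservation in reservations
    for instance in reservation["Instances"]
]

if instance_ids:
    # Stopped instances stop accruing compute charges (attached EBS volumes still bill).
    ec2.stop_instances(InstanceIds=instance_ids)
    print(f"Stopped: {instance_ids}")
```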

Another notable design principle is measuring the overall efficiency of your workloads and the costs associated with delivering them. The results of these analyses can then be used to understand the gains you make from increasing output, adding functionality, and reducing costs.

Another principle is referred to as stop spending money on undifferentiated heavy lifting: AWS does the heavy lifting of data center operations such as racking, stacking, and powering servers, and managed services also remove the operational burden of managing operating systems and applications.

Last but not least, AWS offers the opportunity to analyze and attribute expenditure by making it easier to accurately identify the cost and usage of workloads, which in turn allows transparent attribution of IT costs to revenue streams and individual workload owners.
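
Building on the earlier Cost Explorer call, the sketch below attributes spend to workload owners by grouping costs on a cost allocation tag. It assumes a tag key named `team` has already been activated as a cost allocation tag in the Billing console; swap in whatever key your organization uses.

```python
import boto3

ce = boto3.client("ce")

# Group January's spend by the (hypothetical) "team" cost allocation tag.
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "team"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    tag_value = group["Keys"][0]   # e.g. "team$platform"; "team$" means untagged
    cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{tag_value}: ${cost:.2f}")
```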

If you feel like exploring further, you can always review the Official Documentation.

4. Architecting for Scalability and Elasticity

Scaling your architecture based on demand is a fundamental part of cost optimization. Solution Architects should design robust and scalable applications with a must-have building principle always in mind: "build today with tomorrow in mind". This means applications need to cater to current scale requirements as well as to the anticipated growth of the solution. That growth can be the organic growth of a solution, or it could be the result of a merger and acquisition or a similar scenario.

This topic is closely related to the AWS Reliability Pillar, which states that an application designed under this principle should achieve reliability in the context of scale and modularity, and should be constructed from multiple, loosely coupled building blocks (functions).
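
As a hedged sketch of building elasticity into one of those loosely coupled tiers, the snippet below attaches a target-tracking scaling policy to a hypothetical Auto Scaling group named `web-tier`, so that capacity follows demand instead of being provisioned for peak load.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep average CPU of the (hypothetical) "web-tier" Auto Scaling group
# around 50%: instances are added under load and removed when demand drops,
# so you only pay for the capacity the workload actually needs.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```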

The image below shows how a highly scalable and reliable application can be built using a microservices architecture.

Scalable modular applications

5. Use Case: Cost Optimizing Client's Architecture

So let's assume we're working for a client that is facing a storage issue and needs to migrate their warehouse, comprised mainly of images and scanned documents. They need a solution that guarantees elasticity, scalability, and durability.

They have on the order of 60 terabytes of data, so they want to shift to cloud storage. Their first task is to determine the best service to use and, second, how to transfer these objects to the cloud.

We can offer our client a few options that could cover the main prerequisites mentioned above (elasticity, scalability, and durability): could that be Amazon RDS, DynamoDB, or perhaps Amazon Aurora?

storage_options

So far so good. The next requirement is to determine the nature of the objects to be stored: are they structured or unstructured, and what sizes and formats do they come in? Our client needs to store all sorts of image formats, including TIFF, JPEG, and PNG, and in this case Amazon S3 comes up as an ideal option, as it provides highly durable, highly available, elastic object storage. If you are at all familiar with this versatile and commonly used AWS service, chances are you have seen this table before:

s3_storage_classes

Another remarkable feature is that we won't need to provision the Amazon S3 storage capacity first: the service scales up to accommodate any future volume of data we need it to hold. That means our client can store and retrieve any type of object in Amazon S3 whenever they want, whether images, files, videos, documents, or archives. It scales quickly and dynamically, so you don't need to pre-provision space for the objects you store.
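
A minimal sketch of that, assuming a hypothetical bucket named `client-image-archive` already exists: objects of any format are simply uploaded by key, with no capacity to pre-provision.

```python
import boto3

s3 = boto3.client("s3")
bucket = "client-image-archive"   # hypothetical bucket name

# Upload a scanned document -- S3 does not care about the format,
# and there is no storage capacity to pre-provision.
s3.upload_file("invoice-0001.tiff", bucket, "scans/invoice-0001.tiff")

# Retrieve it again whenever needed.
s3.download_file(bucket, "scans/invoice-0001.tiff", "invoice-0001-copy.tiff")
```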

The table above shows three Amazon S3 storage classes, each providing a different combination of availability, performance, and cost, which makes S3 a highly flexible solution.

By comparison, choosing a relational database such as Amazon RDS to store these objects would be unlikely to be efficient, and the cost would probably outweigh any benefit of running a relational database in the cloud for this use case.

Now that the implementation is almost settled, we can start looking at ways of optimizing our storage further, beginning with the S3 Standard class. It is the most widely used class (used for what is called "hot data"), as it provides the best combination of durability and availability among the storage classes; in fact, it's hard to find a use case that this class doesn't suit at all!

The first step was determining the right class to support our data; now we should look at how to import it. As an AWS Solutions Architect, you might think about using a device called AWS Snowball.

snowballs

The reasons for choosing it, how the device works, and its main features are a little outside the scope of this article; if you want to explore further, just check the very first source of truth, the AWS Official Documentation.

The Infrequent Access storage class is the second class to analyze. If the Standard class is best suited for hot data (data that is accessed very frequently), then Infrequent Access is ideal for warm data: data that is not accessed as frequently as hot data. :) We still need the durability, but retrieval time is not mission-critical. So if the requirement is to retrieve assets regularly, the Standard class would probably end up being more efficient than Infrequent Access.
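
Choosing the class is just one extra argument at upload time. The hedged sketch below (reusing the hypothetical bucket from earlier) writes a rarely read scan straight into Infrequent Access.

```python
import boto3

s3 = boto3.client("s3")

# Warm data: still durable and immediately retrievable, but billed at a
# lower per-GB storage rate (with a per-GB retrieval charge on access).
s3.upload_file(
    "archive-2019-batch.tiff",
    "client-image-archive",            # hypothetical bucket from earlier
    "warm/archive-2019-batch.tiff",
    ExtraArgs={"StorageClass": "STANDARD_IA"},
)
```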

The third storage class is Amazon Glacier, which is essentially (as its name suggests) the one used for "cold storage", or cold data.

glacier

Glacier suits archives, tape library replacements, and anything where we may need to keep a version of a backup or an archive to meet compliance requirements.

Accessing data in the Standard class is very fast and highly available; with Infrequent Access, availability is somewhat lower. With Glacier, it can take several hours for a requested archive to be retrieved (typically three to five for a standard retrieval), so it's perfect for anything without a tight time constraint. Objects stored in Glacier simply have a relatively long return time.
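
As a hedged sketch, retrieving an object that has been transitioned to Glacier is a two-step process: first request a temporary restore, then download it once the restore completes. Bucket and key names here are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Ask S3 to restore a Glacier-stored object for 7 days using the
# standard retrieval tier (typically a few hours).
s3.restore_object(
    Bucket="client-image-archive",              # hypothetical bucket
    Key="cold/backup-2015.tar",
    RestoreRequest={
        "Days": 7,
        "GlacierJobParameters": {"Tier": "Standard"},
    },
)

# Check restore progress; download as usual once it completes.
head = s3.head_object(Bucket="client-image-archive", Key="cold/backup-2015.tar")
print(head.get("Restore"))   # e.g. 'ongoing-request="true"' while in progress
```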

Right at this point is when lifecycle rules come into play, to shift assets from the Standard or Infrequent Access storage classes to Amazon Glacier. If we have older backups and tape archives, we might want to consider importing those directly into Glacier, and also setting up rules that shift older assets to Glacier after a period of time, as sketched below.
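
The hedged sketch below defines a lifecycle rule on the hypothetical bucket that transitions objects under a `scans/` prefix to Infrequent Access after 30 days and to Glacier after 90; an Expiration block could be added if old assets may eventually be deleted.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="client-image-archive",          # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-older-scans",
                "Filter": {"Prefix": "scans/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```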

There is another useful Amazon S3 feature, the Storage Class Analysis tool, which observes your data access patterns over time and gathers information to help you improve the lifecycle management of your Standard and Standard-IA storage classes. The tool observes the access patterns of your filtered data sets for 30 days or longer before giving you a result, then keeps running after the initial result and updates it as the access patterns change, which is a really useful feature.
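
As a hedged sketch, Storage Class Analysis can be enabled on the same hypothetical bucket, exporting daily CSV reports to a separate (also hypothetical) reporting bucket; the analysis needs roughly 30 days of observed access patterns before its recommendations settle.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_analytics_configuration(
    Bucket="client-image-archive",               # hypothetical bucket
    Id="scans-access-patterns",
    AnalyticsConfiguration={
        "Id": "scans-access-patterns",
        "Filter": {"Prefix": "scans/"},          # analyze only the scans/ prefix
        "StorageClassAnalysis": {
            "DataExport": {
                "OutputSchemaVersion": "V_1",
                "Destination": {
                    "S3BucketDestination": {
                        "Format": "CSV",
                        "Bucket": "arn:aws:s3:::client-analysis-reports",  # hypothetical reports bucket
                        "Prefix": "storage-class-analysis/",
                    }
                },
            }
        },
    },
)
```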

6. Conclusion

In this comprehensive guide on designing cost-optimized AWS architectures, we've covered the principles, practices, and practical exercises to create architectures that not only meet your performance needs but also save on costs.

As you continue your journey with AWS, remember that cost optimization is an integral part of building efficient and sustainable cloud solutions. Happy architecting!
