Matt Lewis for AWS Heroes

Deep dive into AWS Block Storage

Background

With little background in block storage, I wanted to challenge myself and learn more by completing the Storage Learning Plan on AWS Skill Builder. This was my first time using AWS Skill Builder, and I was hugely impressed with it as a free resource. The learning plan was comprehensive, and I also received a digital badge after passing the exam at the end. This post summarises my key findings from the learning plan and associated resources, so I can look back at them when I need to.

What is block storage?

With block storage, the available storage is formatted into pre-defined continuous segments (or blocks) of equal size. Each block is an individual piece of data storage. Large pieces of information to be stored may be broken down and written to multiple blocks. There are two main block storage solutions for Amazon EC2, which are EC2 instance store and Amazon Elastic Block Store (EBS).

Instance Store versus EBS

Some EC2 instances provide access to non-persistent block storage in the form of an instance store. These are raw disks directly attached to the host, which means instance store volumes must be specified when the instance is launched. They cannot be detached and attached to another instance. The storage is non-persistent, as it is available only for the duration of the instance: data in the instance store is lost if the instance stops or is terminated, or if the underlying disk drive fails. It is typically used as temporary storage for data that changes frequently, such as buffers and caches. For any data that must be quickly accessible and requires long-term persistence, the recommendation is to use durable block storage in the form of EBS.

EBS provides highly scalable, available and durable block storage. To give an idea of scale, for Prime Day in 2022, Amazon provisioned an additional 153 petabytes of EBS storage which handled 11.4 trillion requests per day and transferred 532 petabytes of data per day.

EBS might conceptually be thought of as a hard drive attached to your instance. However, it is not a single hard drive but a massively distributed system that is accessed over the network, so you should think of it more as block storage as a service. When EBS was first launched in August 2008, an EBS volume was contained on a pair of servers. Nowadays, volumes are split into shards. This means they are no longer constrained by individual storage nodes. Since the shards are smaller, there is less data to replicate, and this means it is faster to replicate in a failure condition, improving the overall durability of the service. Each AZ has its own configuration manager, which stores the authoritative mapping of volumes to hosts. The communication between the EC2 instance and the back-end storage subsystem now makes use of equal-cost multi-pathing. This means there are multiple paths data can take between two endpoints. The transport protocol is known as SRD (scalable reliable datagram), and is designed to route traffic easily around failures in the network.

Certain EC2 instance types can be launched as EBS-optimised instances. These provide additional, dedicated bandwidth for Amazon EBS I/O, minimising any contention with other traffic from your instance. These EBS-optimised instances are designed for use with all Amazon EBS volume types.

EBS is designed to support most workloads at any scale. To do that, it provides a variety of volume types to match to your workload needs.

EBS Volume Type Comparison

For workloads on AWS, there are two main categories of EBS volumes:

  • Solid state drive (SSD) - these are optimised for transactional workloads involving frequent read/write operations with small I/O size, where the dominant performance attribute is IOPS
  • Hard disk drive (HDD) - these are optimised for large streaming workloads where the dominant performance attribute is throughput

General Purpose SSD

General Purpose SSD volumes cover the gp2 and gp3 volume types. Both of these volume types are designed to provide single-digit millisecond latency. This is the true round-trip time of an I/O operation: the elapsed time between sending an I/O to Amazon EBS and receiving an acknowledgement from Amazon EBS that the read or write operation is complete.


The gp2 volume type is the first-generation general-purpose implementation. It provides a baseline 100 IOPS for smaller volumes, with performance scaling linearly at 3 IOPS per GiB of volume size, up to a maximum of 16,000 IOPS. This means you would need to provision a larger volume size to increase the number of IOPS. As EBS charges for provisioned storage, this could lead to significantly higher costs.
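As a rough illustration (a sketch of the scaling rule above, not an official sizing tool), the gp2 baseline can be modelled as:

```python
def gp2_baseline_iops(size_gib: int) -> int:
    """Approximate gp2 baseline IOPS: 3 IOPS per GiB of volume size,
    with a 100 IOPS floor and a 16,000 IOPS ceiling.
    (Burst credits for small volumes are not modelled here.)"""
    return min(max(100, 3 * size_gib), 16_000)

print(gp2_baseline_iops(30))    # 100 - small volumes sit at the floor
print(gp2_baseline_iops(1000))  # 3000
print(gp2_baseline_iops(6000))  # 16000 - capped
```

Note this ignores gp2's burst-bucket behaviour, which lets small volumes burst to 3,000 IOPS for limited periods.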

In December 2020, AWS announced the general availability of a new general purpose storage volume type known as gp3. AWS designed gp3 to provide a predictable 3,000 IOPS baseline performance and 125 MiB/s throughput, regardless of volume size, at costs up to 20% lower than gp2.

gp3 volumes are estimated to be appropriate for up to 80% of all workloads

With gp3 volumes, you can provision IOPS and throughput independently, without increasing storage size.
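To see why decoupling IOPS from storage size matters, here is a back-of-the-envelope comparison based on the 3 IOPS per GiB gp2 scaling described earlier (a sketch, ignoring gp2 burst behaviour):

```python
import math

def gp2_size_for_iops(target_iops: int) -> int:
    """Minimum gp2 volume size (GiB) whose baseline reaches target_iops,
    given the 3 IOPS per GiB scaling."""
    return math.ceil(target_iops / 3)

# A 10 GiB gp3 volume already delivers the 3,000 IOPS baseline;
# gp2 needs 1,000 GiB of provisioned storage to match it.
print(gp2_size_for_iops(3000))   # 1000
print(gp2_size_for_iops(16000))  # 5334 - to reach gp2's 16,000 IOPS maximum
```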

Provisioned IOPS SSD

Provisioned IOPS SSD volumes cover the io1, io2 and io2 Block Express volume types. Both io1 and io2 volume types provide single-digit millisecond latency, whilst io2 Block Express provides sub-millisecond latency. These are the highest performing storage volumes designed for critical, IOPS-intensive, and throughput-intensive workloads that require low latency.


Note that to achieve the maximum throughput of 1,000 MiB/s, the volume must be provisioned with 64,000 IOPS and attached to an instance built on the Nitro System.

The io1 volume type is the first-generation provisioned IOPS implementation. In August 2020, AWS launched the next generation β€˜io2’ volume type. This provides two big benefits over the io1 type for the same price:

  • Higher Durability - The io2 volumes are designed to deliver 99.999% durability, making them 2000x more reliable than a commodity disk drive
  • IOPS - The io2 volumes offer 500 IOPS per GiB compared to the 50 IOPS per GiB of io1 volumes. This means you need only 10 GiB of provisioned storage to support 5000 IOPS using io2, but you would need 100 GiB of provisioned storage to get that performance on io1.
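The storage arithmetic in the second bullet can be sketched as a small helper, using the 50 and 500 IOPS per GiB ratios quoted above:

```python
import math

# IOPS per GiB of provisioned storage, as quoted for each volume type
IOPS_PER_GIB = {"io1": 50, "io2": 500}

def min_size_for_iops(volume_type: str, target_iops: int) -> int:
    """Minimum provisioned storage (GiB) needed to support target_iops."""
    return math.ceil(target_iops / IOPS_PER_GIB[volume_type])

print(min_size_for_iops("io2", 5000))  # 10
print(min_size_for_iops("io1", 5000))  # 100
```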

In July 2021, AWS announced the general availability of io2 Block Express volumes that deliver up to 4x higher throughput, IOPS and capacity than regular io2 volumes, and are designed to deliver sub-millisecond latency.

If you need more than 64,000 IOPS, a data volume larger than 16 TiB, or more than 1,000 MiB/s of throughput, io2 Block Express is the only game in town.

HDD

HDD-backed volumes include Throughput Optimized HDD (st1) for frequently accessed, throughput intensive workloads and the lowest cost Cold HDD (sc1) for less frequently accessed data.


We already understand that IOPS is a measurement for the number of input and output operations per second. The maximum amount of data that counts as a single I/O is dependent on drive type.

I/O size is capped at 256 KiB for SSD volumes, but capped at 1,024 KiB for HDD volumes. This means a single 1,024 KiB operation counts as one I/O on an HDD volume, but would count as four separate operations on an SSD volume. This highlights the fact that SSD volumes are optimised for small or random I/O operations, whereas HDD volumes are designed for large sequential I/O operations.
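A simplified way to reason about the capping rule (this ignores how EBS actually merges or splits sequential I/O in practice):

```python
import math

# Maximum data that counts as a single I/O operation, per drive type
IO_SIZE_CAP_KIB = {"ssd": 256, "hdd": 1024}

def io_ops_counted(request_kib: int, drive_type: str) -> int:
    """How many I/O operations a single request of request_kib counts as,
    given the per-operation size cap for the drive type."""
    return math.ceil(request_kib / IO_SIZE_CAP_KIB[drive_type])

print(io_ops_counted(1024, "hdd"))  # 1
print(io_ops_counted(1024, "ssd"))  # 4
```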

Whilst SSD volumes define performance in terms of IOPS, these HDD volumes define performance in terms of throughput. Throughput is the amount of data that can be read or written per second. This is most important when it comes to large sequential files. This makes HDD volumes most suited to workloads like log processing and big data.

Additional Features

The learning plan touched on far more than just the different volume types available. To have a chance of passing the exam at the end, you need to know about some of the interesting features and capabilities available for EBS, including the following:

Elastic Volumes

With EBS Elastic Volumes, you can make changes as needed to optimise your EBS volumes. EBS Elastic Volumes let you change the volume type, dynamically increase the volume size, and modify performance characteristics. For gp3 and io2 volume types, you can dynamically change the provisioned IOPS or provisioned throughput performance settings for your volume. A best practice is to start with a smaller volume size that meets your immediate requirement, and then expand volume size with Elastic Volumes. This helps keep costs lower, as you cannot dynamically reduce volume size. To reduce the volume size, you need to first create a smaller volume and then migrate data to it using an application-level tool.

Fast Snapshot Restore

EBS snapshots are a backup of all the data on your EBS volumes, stored in Amazon S3. EBS provides incremental snapshots, so you are only billed for changed blocks. EBS snapshots are crash consistent: they discard whatever is in memory and only contain the blocks of completed I/O operations. When multiple volumes are attached to the same EC2 instance, the snapshot is taken at the exact same moment on all volumes. EBS snapshots can also be shared across accounts and copied between accounts and regions.

Previously, the challenge with launching or restoring from a snapshot was the time taken to go through the data transfer process from S3 to an EBS volume. This was an asynchronous process that took time in the background. Often, the blocks of data would be lazy loaded only when they were queried.

Fast snapshot restore reduces the time to restore from a snapshot by as much as 6x. Once you enable FSR on a snapshot, you can create volumes from the snapshot with access to all the blocks without any initialisation or pre-warming. It works on the basis of a bucket-based credit system, with credits refilling over time based on 1 TiB per hour of snapshot size. It allows volumes to be restored simultaneously.
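A very simplified model of that refill rate, based on the 1 TiB per hour figure above (the cap of 10 credits per hour for small snapshots is my assumption from the AWS documentation, so treat it as approximate):

```python
def fsr_fill_rate(snapshot_size_gib: int) -> float:
    """Simplified FSR credit fill rate, in volume creations per hour:
    roughly 1 TiB per hour divided by the snapshot size, with an
    assumed cap of 10 credits per hour for small snapshots."""
    return min(10.0, 1024 / snapshot_size_gib)

print(fsr_fill_rate(1024))  # a 1 TiB snapshot earns ~1 credit per hour
print(fsr_fill_rate(256))   # a 256 GiB snapshot earns ~4 credits per hour
```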

EBS Multi Attach

Another relatively new feature is EBS Multi Attach. This can be enabled on an EBS provisioned IOPS io1 or io2 volume to allow the volume to be concurrently attached to up to 16 Nitro-based EC2 instances within the same AZ. Each attached instance has full read and write permission to the shared volume.

Encryption

EBS provides support for encryption of data at rest, in transit, and all volume backups. It is fully integrated with Amazon KMS, which includes bringing your own customer managed key (CMK).

Amazon EBS encrypts your volume with a data key using industry-standard AES-256 data encryption. The data key is generated by AWS KMS. The plaintext data key is encrypted by AWS KMS using the same KMS key prior to being passed down to EBS and stored with the volume information. All encryption occurs on the servers that host EC2 instances, so the data is encrypted as it moves between EC2 instances and EBS data and boot volumes. All snapshots, and any subsequent volumes created from those snapshots using the same AWS KMS key, share the same data key.

Once encryption is enabled, all data is encrypted, including:

  • Data at rest inside the volume
  • Data moving between the volume and the instance
  • Snapshots created from the volume
  • Volumes created from such snapshots

Summary

Although block storage is not an area I've needed to be close to, the learning plan on AWS Skill Builder provided a comprehensive background that can only improve decision making and understanding for anyone involved with block storage for EC2 instances. Even if block storage is not for you, there are now plenty of other learning plans with associated digital badges available, including the recently released Serverless badge. As a free resource, you have nothing to lose, and I'd recommend anyone with an interest in AWS to check it out.
