AWS re:Invent 2025 - What's new with Amazon EBS (STG202)

🦄 Making great presentations more accessible.
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.

Overview

📖 AWS re:Invent 2025 - What's new with Amazon EBS (STG202)

In this video, Aaron Chen and Sooraj Prasannan present Amazon EBS innovations from the past 12-18 months through the lens of a fintech company's three key workloads. For transactional systems, they announce IO2 Block Express latency improvements to sub-500 microseconds, global availability across all regions, new CloudWatch metrics including average latency and throughput, instance-level exceed checks, and FIS latency injection for chaos testing. For analytical warehouses, GP3 volumes now offer 5X IOPS (80K), 2X throughput (2GB/s), and 4X capacity (64TB), while R8GB instances deliver 150Gbps bandwidth. For developer environments, new EBS volume clones enable instant cross-volume copying without snapshots, time-based AMI copy ensures predictable 15-minute transfers, and provisioned rate for volume initialization reaches 300MB/s. Recycle Bin protection extends to EBS volumes for accidental deletion recovery.


This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.

Main Part

Thumbnail 0

Introduction to Amazon EBS and Customer-Driven Innovation

All right. Hi everyone. Welcome. Welcome to Vegas. Welcome to the session at 5:30 in Mandalay Bay. You guys must love us. Alright, today we're going to take you through this journey of what's new with Amazon EBS. We're going to cover the innovations we have done in the past 12 to 18 months, and we're going to have a little bit of fun. My name is Aaron Chen. I'm a Principal Product Manager at Amazon EBS. Along with me is my friend Sooraj Prasannan, a Senior Manager of Product Management at Amazon EBS. We'll invite him to the stage later; I'll walk you through the first half of the journey, and then we'll swap.

Thumbnail 60

Before we get started, let's talk about EBS a little bit. EBS stands for Elastic Block Store. At its most basic, it's a network-attached persistent storage for your EC2 instance. The storage is connected to your virtual machines over the network, so it's disaggregated from your EC2 instance in terms of the data lifecycle. You are able to stop and restart your instance, you are able to reboot your instance, and your volume and your data will still be there. That's why we're able to offer different volume types because it's attached over the network. You have your IO2 for mission-critical workloads. You have your GP3 for general-purpose workloads.

EBS also provides the capability of backing up your data by snapshotting. A snapshot is backed by Amazon S3 to provide high durability, so in case of an issue, you are able to start from your snapshot and get back to your production environment. You also want to move data between volumes, or between volumes and snapshots. This helps with different workloads such as instant boot, test and dev environment creation, or disaster recovery. That's what we call data services. So EBS consists of those three pillars.
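As a rough illustration of that backup-and-restore loop, here is a minimal boto3 sketch; this is not from the session itself, and all resource IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Back up a volume; the snapshot data is stored durably in Amazon S3.
snap = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # placeholder volume ID
    Description="Backup of the transactional database volume",
)

# Wait until the snapshot completes before restoring from it.
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# Restore by creating a new volume from the snapshot in whichever AZ you need.
restored = ec2.create_volume(
    SnapshotId=snap["SnapshotId"],
    AvailabilityZone="us-east-1a",
    VolumeType="gp3",
)
print(restored["VolumeId"])
```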

Thumbnail 170

One thing that's very important to know is that EBS is not just a provisioned disk. It's not only offering the block storage volume; behind the scenes, it's actually a vast distributed system maintained and operated by our EBS engineers. With that kind of system, we are able to offer you different levels of performance, durability, and different features. In terms of workloads, EBS is a storage that serves a wide variety, all the way from the most mission-critical workloads that power your ERP and CRM, to all the boot volumes that boot your EC2 instances, to the indexing layer and caching for your analytical reporting environments. If you have ever spun up an EC2 instance or ever run anything on Amazon Web Services, chances are you have used EBS.

Thumbnail 200

Thumbnail 230

Thumbnail 240

Thumbnail 260

Like all Amazon services, all the innovation we do is deeply rooted in customer needs and use cases. EBS is no exception. So today we are looking at EBS innovation through the lens of a customer. Let's use this rapidly growing fintech company as an example. Their core business runs on an operational database for real-time transaction processing. They also have an analytical warehouse they can use to understand what's happening with their business, with daily and weekly reports, generating insights, and analyzing their business. Lastly, to provide new features and new experiences to their customers, they need to build and test, which is why they have a developer environment: it lets them continually bring more innovations to their customers in a more controlled way. So we're going to talk about our innovations in these three areas.

Performance Excellence for Mission-Critical Transactional Systems

Let's start with the transactional processing systems. These are your most mission-critical applications handling incoming customer traffic. For our fintech customer here, the customer experience is directly correlated to the performance of the underlying system, which is why this system is business critical. These applications are typically backed by relational databases, and those relational databases run on EC2 as your compute layer while EBS serves as the persistent storage layer.

Thumbnail 310

Thumbnail 320

Thumbnail 330

Thumbnail 340

Thumbnail 360

These relational databases usually require consistent sub-millisecond latency. This is because applications trigger multiple correlated I/Os, and when they come together, the storage layer ends up handling thousands or even millions of transactions. Additionally, because they want to grow rapidly and expand globally, they need infrastructure-level parity worldwide so they can simplify their infrastructure. They are able to run their applications without any modifications, with the same architecture everywhere, which takes a lot of operational burden off them. Lastly, because this system is very mission critical, the customer needs to constantly ensure that it has high resiliency. As a result, they need observability to understand the performance of their systems and whether it is meeting expectations, and they need to keep testing whether the system's resiliency is good enough and whether the failover mechanisms will operate as expected to maintain high availability.

Thumbnail 400

Let's unpack all three of these pillars. Let's start with the performance pillar: millions of transactions with consistent sub-millisecond latency. To deliver the desired experience where customers don't experience lag, these applications require a storage layer with very low latency. Imagine you are clicking a button, submitting a form, or making a purchase. These actions usually trigger a ton of correlated I/Os, either reads or writes. All of us hate spinning circles and waiting for something to process.

To ensure customers get an instant response, each storage I/O needs very low latency, ideally sub-millisecond. And to make that experience consistent, your storage layer needs to deliver that low latency in a very consistent manner. If you don't, it might lead to lag or even application unavailability, which could mean loss of sales or loss of revenue, so there is real monetary impact. That is why at EBS we recommend you run very critical workloads on IO2 volumes. IO2 volumes provide the lowest P99.9 I/O latency among the major cloud providers.

Thumbnail 510

We further tightened our latency guidance from sub-millisecond average latency to sub-500 microseconds. Beyond that, we also added clarity to how IO2 outlier latency differs from our general purpose volumes. Now customers are able to understand the difference and make the right price-performance trade-off. So we have covered the performance pillar. The next pillar is global infrastructure parity. As our fintech company grows, they want to expand into new regions worldwide. They want the same infrastructure available across regions, which reduces operational burden and brings the data closer to their customers, so they can achieve lower latency and offer the same experience everywhere.

Thumbnail 550

Thumbnail 570

Building Resiliency Through Observability, Chaos Testing, and Data Protection

As a result, we expanded IO2 Block Express volumes to all commercial and GovCloud regions, meeting that customer need. Our fintech company is able to serve their customers in the same way everywhere. Now that we have talked about performance and global parity, the next pillar is high resiliency. For an application to be highly resilient, it starts with one thing, and that is measurement. We often hear customers asking questions like: I have an EC2 and EBS environment, how do I monitor the performance? Do I have enough headroom? We just got an issue, is it an application issue or an infrastructure issue? AWS can help with this. We want to provide observability, which helps customers understand their performance and where things can go wrong.

Thumbnail 610

Over the past couple of years, Amazon EBS has been on a journey to improve visibility into volume performance. In 2024, we launched a metric for average latency in CloudWatch. This gives customers the ability to understand where a performance bottleneck exists. This year we launched IOPS and throughput metrics in CloudWatch so that customers can monitor and track performance trends and optimize. These are the regular metrics. We also launched another set of metrics called exceed checks. Last year we launched the volume-level exceed checks, and this year we launched the instance-level exceed checks. These exceed checks are binary zero-or-one metrics: if the IOPS or throughput you drive goes beyond your system's entitlement, the metric flips to one, and you know that's where your bottleneck is. Those signals tell you right away whether a performance degradation is caused by your system's entitlement or by something else.
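To make this concrete, here is a hedged boto3 sketch of reading an exceed check and remediating with Elastic Volumes. The metric name VolumeIOPSExceededCheck is my assumption based on the talk, so check the EBS CloudWatch documentation for the exact names; modify_volume is the standard Elastic Volumes call, and the performance values are illustrative:

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

VOLUME_ID = "vol-0123456789abcdef0"  # placeholder

# Pull the last hour of the exceed-check metric for one volume.
end = datetime.now(timezone.utc)
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EBS",
    MetricName="VolumeIOPSExceededCheck",  # assumed metric name; verify in the EBS docs
    Dimensions=[{"Name": "VolumeId", "Value": VOLUME_ID}],
    StartTime=end - timedelta(hours=1),
    EndTime=end,
    Period=60,
    Statistics=["Maximum"],
)

# The check is a 0/1 signal: any datapoint at 1 means the driven IOPS
# went beyond the volume's provisioned entitlement during that minute.
throttled_minutes = [dp for dp in resp["Datapoints"] if dp["Maximum"] >= 1]

if throttled_minutes:
    # Remediate with Elastic Volumes: raise provisioned performance in place,
    # with no detach and no downtime.
    ec2.modify_volume(VolumeId=VOLUME_ID, Iops=16000, Throughput=1000)
```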

Thumbnail 700

If your bottleneck is on the volume side, you are able to use Elastic Volumes, as sketched above, to tune up the performance and release that constraint, or you can modify the instance size. Now customers say: I can measure the impact, what's next? I can see the average latency, but how do I know at what point my system will get disrupted, and whether my standard operating procedure will work? So we're introducing chaos testing. It's not an unfamiliar concept, because today it is necessary for building a resilient and reliable system: you proactively run tests and find vulnerabilities so you can fix them.

Chaos testing is something you do in a controlled environment, because you don't really want to do it in production. You intentionally simulate a failure in this environment so you can uncover hidden weaknesses or dependencies. Once you uncover them, you can fine-tune the process to improve recovery times and test your standard operating procedures to ensure the system fails over correctly. That is critical for modern complex and distributed systems.

Thumbnail 770

AWS Fault Injection Simulator, or FIS, is the service for exactly that. It is a managed service that enables you to run fault injection experiments on your workloads. EBS faults usually manifest as slow I/O or stuck I/O. In 2023 we launched an EBS action in FIS for stuck I/O, so you were able to test whether a stuck volume is detected and whether your timeout and alarm functions work as expected. This year we are introducing latency injection, so you can now inject latency into your system. Think of it as simulating a gray failure. We offer four different types of templates: sustained latency, increasing latency, intermittent latency like a spike, and decreasing latency. Of course, you can also build your own custom template by specifying the read and write mix and introducing latency to reads or writes.
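For a sense of how this wires together, here is a sketch of a FIS experiment template via boto3. The action ID aws:ebs:inject-volume-latency and its parameter names are assumptions on my part (the earlier stuck-I/O launch used aws:ebs:pause-volume-io), so consult the FIS action reference for the real latency-injection names; the role ARN and volume ARN are placeholders:

```python
import boto3
import uuid

fis = boto3.client("fis", region_name="us-east-1")

template = fis.create_experiment_template(
    clientToken=str(uuid.uuid4()),
    description="Inject sustained read latency into a test volume",
    roleArn="arn:aws:iam::111122223333:role/fis-ebs-role",  # placeholder role
    stopConditions=[{"source": "none"}],  # use a CloudWatch alarm in real tests
    targets={
        "testVolume": {
            "resourceType": "aws:ec2:ebs-volume",
            "resourceArns": [
                "arn:aws:ec2:us-east-1:111122223333:volume/vol-0123456789abcdef0"
            ],
            "selectionMode": "ALL",
        }
    },
    actions={
        "injectLatency": {
            # Assumed action ID and parameters for the new latency-injection action.
            "actionId": "aws:ebs:inject-volume-latency",
            "parameters": {
                "duration": "PT10M",         # ISO 8601 duration of the experiment
                "latencyMilliseconds": "5",  # assumed knob: added latency per I/O
            },
            "targets": {"Volumes": "testVolume"},
        }
    },
)
print(template["experimentTemplate"]["id"])
```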

Thumbnail 860

With performance observed and tested, impact mitigated, and standard operating procedures in place, the other part of achieving resiliency is preventing data loss. Customers sometimes delete volumes by accident or even with malicious intent. That matters because those kinds of disruptions can result in financial loss or sometimes even damage to your reputation. So having a data protection strategy is critical, and ensuring the data can be restored in a timely manner after an incident is very important.

Thumbnail 900

Thumbnail 930

We launched Recycle Bin for EBS snapshots a couple of years back, and this year we have extended that support to EBS volumes. To use Recycle Bin, you set a retention period, so when you delete a volume, it goes right to the bin. Within the retention time, you are able to perform a recovery action and the volume goes back to production. Recovered volumes are immediately available: you can attach them to your EC2 instance and resume your workload. The volume retains all the tags, the encryption status, and the permissions you specified for it before.
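A sketch of setting such a retention rule with the Recycle Bin (rbin) API follows. The ResourceType value EBS_VOLUME is my assumption for the new volume support (EBS_SNAPSHOT is the long-standing value), so verify it against the current API reference; the tag scoping shown here anticipates the grouping described next:

```python
import boto3

rbin = boto3.client("rbin", region_name="us-east-1")

rule = rbin.create_rule(
    Description="Keep deleted production volumes recoverable for 7 days",
    RetentionPeriod={
        "RetentionPeriodValue": 7,
        "RetentionPeriodUnit": "DAYS",
    },
    ResourceType="EBS_VOLUME",  # assumed enum value for the new volume support
    ResourceTags=[
        # Scope the rule to a group of volumes by tag.
        {"ResourceTagKey": "environment", "ResourceTagValue": "production"}
    ],
)
print(rule["Identifier"])
```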

Thumbnail 960

Volumes that are not recovered are deleted permanently when the retention period expires. You can create these retention rules for a single volume or for a group of volumes, using tags to identify them. Coming back to our FinTech customer, we covered the first pillar, which is the real-time transaction system. Now let's welcome Sooraj back to the stage. He'll talk to you about the other two.

Thumbnail 1020

Powering Analytical Warehouses with Enhanced GP3 Volumes and Network Performance

Thanks, Aaron. Aaron spoke about the real-time transactional systems, which are extremely important to run your business. But the analytical warehouse system is where you get to figure out what's happening with the business, where the business is going, and what to do about it. So it's extremely important. The way I think about it, you have so many different stakeholders in the business, and an analytical warehouse aligns everyone on a single source of truth.

That's why your analytical warehouses are so important. If you think about some of the questions you would ask of analytical systems, for financial data it would be: How are my margins trending? How is my revenue trending? For sales, you would be thinking about how your products are selling in each of the different regions. For marketing, you're running different kinds of campaigns, and you want to know what the conversion is for each of them, and if something is going great, how you double down.

If you are doing product development, you're trying to figure out how customers are using your product currently, what the sentiment is, and what complaints you're hearing, so you can drive better product decisions. On the inventory side, you're trying to figure out: Do I have the inventory to support my customers? So there are many different kinds of questions you want to ask of the analytical warehouse system. As you would imagine, a lot of data streams into these systems, and then you have to run analytics and queries on it to get the insights.

Thumbnail 1100

Thumbnail 1130

It's a very resource-intensive system, and we'll get into the kind of storage requirements it drives and what we have done in 2025 to make the analytical warehouse system much better to run on EBS. As I said, you have all kinds of data streaming into the analytical warehouse systems, and typically the landing storage is S3. You have your OLTP system streaming data through change tracking into these systems. You have your sales data, finance data, marketing data, and IoT data. Think about all the kinds of data streaming into S3. That data arrives in all different kinds of formats, so before you can run queries, you have to normalize it to some extent. That's one place where EBS comes into play: running ETL jobs on these huge data sets, because you need a faster processing layer.

Thumbnail 1150

Thumbnail 1160

The other place where EBS comes into play is when you're running queries. It could be scheduled queries, ad hoc queries, or your nightly reporting jobs. Customers need a fast storage layer there to speed up those queries. The first requirement is that data has to be extracted quickly from S3 into the EBS layer so you can run the ETL jobs or your queries. The second requirement comes from the queries themselves: if you're thinking about aggregates and joins, a lot of intermediate data gets generated that you don't necessarily want to persist long term. It's throwaway data that you don't want to keep, but you absolutely need it while the query is running.

Thumbnail 1190

If you lose that data or the volume fails, you'll have to rerun the whole query again, so it's important that EBS has enough capacity to manage that query layer. And lastly, if you're aware of how these data processing layers work, typically there is a framework like Spark that takes a query job and splits it into smaller jobs. That's one way to think about it. These jobs are split into smaller sections so they can run in parallel, which is also better for failure management.

If a small job fails, another job spins up in its place, and another EBS volume attaches. Customers typically use containerized environments for that kind of processing. Now if you take a step back, we need high throughput and large capacity. You can definitely get that by RAIDing multiple EBS volumes together, but the hard part is that in containerized environments, RAID is not natively supported. RAID in general is cumbersome, but in containers, if you're dealing with a lot of systems, it's going to be extremely hard to support RAIDing of volumes. So that drives the requirement: from a single data volume, I need very high throughput and very large capacity.

Thumbnail 1250

That's what motivated us to deliver GP3 volumes with higher performance and larger capacity. It reduces the need for you to RAID volumes in a containerized environment. We increased IOPS by 5X, going from 16K to 80K. We doubled throughput from 1 GB/s to 2 GB/s, and we also increased the maximum size from 16 terabytes to 64 terabytes, a 4X improvement. Aaron also spoke about OLTP systems, and this helps in your OLTP environment as well. Typically, customers have a production environment running on IO2 BX volumes for latency consistency, but test environments don't need that kind of performance, so they run on GP3. However, they need the same size. If you had a 64 terabyte IO2 BX volume before this launch, you would be striping four GP3 volumes to get that same test environment, which is very cumbersome if you're snapshotting that environment.
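At the new limits, a single gp3 data volume for the query layer might be provisioned like this. The 80K IOPS, 2 GB/s, and 64 TB values come straight from the launch as described above; whether your account and region have the new limits enabled is worth confirming:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# One large, fast gp3 volume instead of a RAID 0 stripe of four smaller ones.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    VolumeType="gp3",
    Size=65536,       # 64 TiB in GiB, the new gp3 maximum (4X the old 16 TiB)
    Iops=80000,       # 5X the previous 16K ceiling
    Throughput=2000,  # MiB/s, double the previous maximum
)
print(volume["VolumeId"])
```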

Thumbnail 1320

Now with this launch, you can have a 64 terabyte IO2 volume and a 64 terabyte GP3 volume, which simplifies your test workflows and improves developer agility. That was one of the features we launched for analytical workflows, and it also helps OLTP. Now, as you know, EBS is network-attached storage. Just having a very large volume with a lot of performance is not going to solve the problem. You also need to make sure that the network pipe from the compute to the EBS fleet is big enough so that customers can drive that throughput and performance to the volume. This has been a journey for us because the bigger the pipe, the lower the total cost of ownership of an EBS solution. Customers who have very large scale workloads also benefit because it helps them land even larger workloads on top of us.

Thumbnail 1370

Thumbnail 1380

We started with 100 Mbps, shared with EC2 networking back in the day. With Nitro, all Nitro instances have a dedicated EBS pipe with much larger performance. In 2019 and 2020, we realized that customers needed a separate SKU with very high networking performance, so we launched R5B and R5N. R5B had 3X higher performance than the generic instances at that point in time. Then we launched R6N to follow that up, and this year we launched R8GB. It has 150 Gbps of bandwidth, which is 18.75 gigabytes per second of throughput from a single instance to the EBS fleet, and 720K IOPS. OLTP systems are very latency sensitive and very tightly coupled. Delivering that latency sensitivity requires scale-up compute, which means a lot of CPU, a lot of memory, and a lot of storage performance from a single instance.

Thumbnail 1450

R8GB helps with exactly that, and you can land even larger workloads on AWS and on EBS. The second thing, as on the previous slide, is that these query jobs get broken down into smaller jobs. When you're running these analytical jobs, you're using smaller instances, and generally you don't want your CPU or memory to be stranded, because that's the most expensive part of an instance. By making the pipe bigger, we ensure that the pipe doesn't become the bottleneck, so you get a better total cost of ownership from an EBS solution. That was for analytical workflows, and it helps OLTP as well; we used analytics as the example because it's easier to explain in that framework. Now the next and last thing we want to discuss is the developer environment. If you think about a developer environment, the customer has a production environment taking live traffic, and they want test environments for development purposes.

Accelerating Developer Workflows with Volume Clones and Predictable Initialization

The requirements are that the production environment should not be impacted when you're creating a test environment. You also want to make sure that the data you're transferring to the test environment is not sensitive, so you don't want that data to be widely shared. Those are two requirements on the production system when you're trying to create a copy of your volume. The other workflow and requirement, on the test environment side, is that they want the data to mimic the production environment. So there is this push and pull from both sides, because you want similarly rich data so you can work on it and simulate real-world scenarios. If you are a person managing infrastructure for developers, there are a couple of things you want to think about.

You want to ensure that the test environment is as close to the production environment as possible. And if you have a distributed development team, you want everybody to work off the same code base. Those are the two key considerations.

Consider our FinTech customer. They have a real-time transaction system with live traffic, which presents a challenge. They want the ability to create a copy of that environment as quickly as possible, debug an issue, identify the root cause, fix it, and deploy the fix. We will see how we made the developer experience simpler this year with the features we have launched, as we dig deeper into both of those workflows.

Thumbnail 1540

Thumbnail 1550

The first workflow is a copy workflow. You have a production environment with EC2 instances and a bunch of EBS volumes attached, and you create your test environment again with instances and a lot of volumes attached. In the production environment, you do not want any kind of impact when you are creating these copy workflows. You do not want to copy sensitive data either. That is a requirement on the production side.

On the test side, you want to create more frequent copies of the data so it stays as fresh relative to the production environment as it can be. You want to be very close to that. The other requirement is that on the production side you might be using IO2 BX volumes, while the test side is not very latency sensitive, so it does not make sense to have IO2 BX there. You might want the ability to mix and match volume types: your source could be IO2, and your destination could be GP3.

Thumbnail 1590

Those are the requirements which that workflow drives. Then if you think about the second workflow, consider this FinTech company which is growing globally. They have developers all around the globe, and they want to make sure everybody is working off the same code base. In that case, what customers do is they create an Amazon Machine Image, a golden AMI, which basically underneath the covers is creating snapshots. The AMI creates snapshots, and then they copy that AMI across different regions, and copying the AMI triggers copy snapshots underneath the covers. When you create a dev environment, you are launching an instance from an AMI, which means a volume is getting created from a snapshot.

You can see there are multiple moving parts here. Customers trigger these workflows sometimes daily, sometimes weekly based on the development cycle. For both of these workflows, they require predictability and speed because if it is not predictable and fast, it is very cumbersome to manage. That is what we heard consistently from our customers.

Thumbnail 1650

Let us look at what we did for each of them. We launched EBS volume clones this year. Previously, when customers had to create a volume clone, they would use snapshots as the intermediate layer. Say you have an environment with IO2 BX volumes. You would create a snapshot of those volumes, which means pushing data from the EBS volumes to EBS snapshots, which live on S3. Then you would create a volume from that snapshot, which means pulling data from the snapshot back down to an EBS volume. It takes time to do that, and it is cumbersome.

After the volume is fully initialized, you would attach those volumes to your test instances to trigger the test workflows. With volume clones, we have taken the snapshot out of the picture. Now you copy data directly from the source EBS volume to the target volume. It is a single API call that instantly creates a performant volume. You can start with an IO2 BX volume on the left-hand side and create a GP3 volume on the right-hand side for your test environment, and we make sure that the production environment and production volume do not see any kind of performance impact.
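A hedged sketch of that single call: I'm assuming the clone source is expressed as a SourceVolumeId parameter on CreateVolume, so treat the exact parameter name as an assumption and check the launch documentation; the volume ID is a placeholder:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Clone an io2 production volume directly into a gp3 test volume,
# skipping the snapshot intermediary entirely.
clone = ec2.create_volume(
    AvailabilityZone="us-east-1a",           # clones are in-AZ copies, per the talk
    SourceVolumeId="vol-0123456789abcdef0",  # assumed parameter name for the source
    VolumeType="gp3",                        # mix and match: io2 source, gp3 target
)
print(clone["VolumeId"])
```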

Those are the requirements from the previous slide, and we have addressed them. As I said, the previous workflow required you to know when the snapshot was completed and when the volume was fully initialized. That was very cumbersome. With this, we have improved developer efficiency. If you recall, the other requirement was to create copies more frequently. Now that copies do not take a lot of time to create, you can create more copies frequently to keep your test environment as close as possible to the production environment, which improves the quality of your code.

Beyond this workflow, we also see customers use cloning for an emergent use case. We kind of knew it could be used for blue-green deployments. Blue-green deployments are where customers would have two live environments. There is a blue environment which is the existing code base, and they have a green environment which is the new code base. Over a period of time, they start moving traffic over, and if everything goes fine, they switch over and the green environment becomes the new environment. If something goes wrong, they can quickly flip back to the blue environment. It reduces the risk when you are deploying software and also makes it much more seamless.

Previously, customers used snapshots for that. Snapshots, as I said, take more time, so as the developer managing the blue and the green, you end up managing both environments for a much longer time because both are running at the same time. Now with clones, you can just have the new environment created.

You still have to manage the flip-over, and all of that is still on you, but it is much easier and simpler than before. It minimizes developer overhead and improves developer agility, which was one of the main motivations for doing this. That is volume clones.

Thumbnail 1830

The other workflow was the golden image, which you want to copy across regions to developers all across the world. As I said, there are multiple steps in that workflow: you create an AMI, then you copy the AMI across different regions, and then you create volumes from it. If a developer comes online in the morning, you do not want them to deal with a system that is not fully performant. That is a waste of their time and a waste of resources. That is one piece of constant feedback we heard.

The copy AMI workflow is, by design, a best-effort workflow. If you are copying an AMI across different regions, your speeds will vary based on how busy the regions are. So a developer in the Dublin region (eu-west-1) might get their copy faster than a developer in the Frankfurt region (eu-central-1), to use the internal names. So we launched time-based AMI copy last February: you can give us a time as low as 15 minutes, and we will make sure these AMIs get copied across all regions within that time. That makes it much easier and predictable; you know the AMIs are going to be there.
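In practice this looks like a CopyImage call with a completion-duration parameter. A sketch follows; the SnapshotCopyCompletionDurationMinutes name matches the time-based copy feature as I understand it, so verify it against the EC2 API reference, and the AMI ID is a placeholder:

```python
import boto3

# Run in the destination region; pull the golden AMI from the source region.
ec2 = boto3.client("ec2", region_name="eu-central-1")

copy = ec2.copy_image(
    Name="golden-dev-ami",
    SourceImageId="ami-0123456789abcdef0",  # placeholder golden AMI
    SourceRegion="eu-west-1",
    # Ask EBS to complete the underlying snapshot copies within 15 minutes.
    SnapshotCopyCompletionDurationMinutes=15,
)
print(copy["ImageId"])
```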

Thumbnail 1900

Thumbnail 1920

The other part of the workflow is launching an instance from an AMI, which triggers pulling data from the snapshot to the EBS volume. That workflow is also not predictable; it is best effort, depending on how heavy the load is on the EBS snapshot fleet, the EC2 fleet, and the EBS volume fleet. So here again we launched provisioned rate for volume initialization. You can go up to 300 megabytes per second, and we ensure that the data gets pulled down from S3 at that speed. So you know predictably when the volume will be fully ready, and you can hand it to your developer knowing the volume is fully performant.
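A sketch of provisioning that rate when restoring from a snapshot: VolumeInitializationRate is the parameter name as I understand this launch, so confirm it in the CreateVolume reference; the snapshot ID is a placeholder:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    SnapshotId="snap-0123456789abcdef0",  # placeholder golden-image snapshot
    VolumeType="gp3",
    VolumeInitializationRate=300,  # MiB/s; hydrate from S3 at a predictable speed
)
print(volume["VolumeId"])
```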

Thumbnail 1930

Along with that, we also launched the ability to know when all the data from the snapshot has been fully pulled into the EBS volume, so you know the volume is ready to take production traffic. That is available for all volume types, not just in the context of this feature. Volume initialization is one of our basic workflows: an AMI launch triggers it, and we have seen customers who launch hundreds of instances from a single AMI use this feature because it gives them predictability that all those instances are ready to take on the workload and production traffic.
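A hedged sketch of polling for that readiness signal: I'm assuming it surfaces as an initialization-state detail in DescribeVolumeStatus, so treat the detail name and status values as assumptions and check the documentation for this launch:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

status = ec2.describe_volume_status(VolumeIds=["vol-0123456789abcdef0"])
for vol in status["VolumeStatuses"]:
    for detail in vol["VolumeStatus"]["Details"]:
        # Assumed detail name for the new initialization-status signal.
        if detail["Name"] == "initialization-state":
            print(vol["VolumeId"], detail["Status"])  # e.g., "completed" when hydrated
```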

We also see customers copying environments across Availability Zones. Because a volume clone is an in-AZ copy, cross-AZ copies still go through snapshots, and this again makes them predictable and fast. And lastly, if you are recovering from a disaster, you are restoring from a snapshot to a volume under strict recovery time objectives. This ensures that you are able to meet that recovery time objective. So it is a core primitive for us, and we are going to keep investing to make it faster. That speeds up a lot of workflows.

Thumbnail 2000

Summary of EBS Innovations and Session Wrap-Up

Just summarizing everything we spoke about here: Aaron covered the real-time transaction system, which is extremely latency sensitive. We tightened the IO2 Block Express latency guidance from sub-millisecond to sub-500 microseconds so you can handle even stricter latency requirements. We want to make sure all customers can benefit from this latency, and IO2 is available worldwide so you can deploy the same stack across the whole world.

We launched performance exceed checks: volume-level checks last year and instance-level checks this year, so in a stack it is very easy to figure out whether the bottleneck is the instance or the volume. We launched average latency metrics last year and IOPS and throughput metrics this year, so you can trend your performance over time, set alarms, and figure out when something is going wrong.

On the resiliency side, we talked about latency injection with FIS, the Fault Injection Simulator. We launched I/O pausing in 2023, which simulates stuck I/O: if you have timeouts in your stack, you can see how they propagate from the storage layer upwards, and whether your alarms fire. This year we launched the ability to inject latency into your data path. You can use the four templates, or you can create your own template with read/write mixes and how many microseconds of latency you want on the read side and on the write side.

And lastly, if customers accidentally delete data, whether a volume or a snapshot: we had Recycle Bin protection for snapshots, and we extended that to volumes as well. On the analytical warehouse side, we need a lot of speed to pull data from S3 into the EBS fleet, so we have larger and faster single GP3 volumes. R8GB, with 150 Gbps of bandwidth and 720K IOPS as I mentioned, also helps your OLTP systems. The larger and faster GP3 makes test environments much simpler to use, and R8GB enables you to land even larger workloads on top of EBS.

For dev environments, to improve developer agility, we launched volume clones: you can instantly create a performant volume from another EBS volume in the same AZ. Provisioned rate for initialization and time-based AMI copy make having a golden image and copying it across the globe much easier. Initialization status helps you understand when your volume is fully ready, so you can put it in production to take on latency-sensitive applications. So that is the set of features we launched this year.

Thumbnail 2150

Here are the other EBS sessions if you are interested. The last one is about networking and EBS workloads, if you like that topic, and he will be there for that session, so you can go to that one. Aaron and I will be hanging out here, so if you have any questions, let us know. Please take the survey; we read all the surveys, and if you can put notes in there, that is great, because we look at those to figure out how to make the session better next year. Thanks for coming. I know it is a very late session, so we appreciate you all being here. Thank you.


This article is entirely auto-generated using Amazon Bedrock.
