🦄 Making great presentations more accessible.
This project enhances multilingual accessibility and discoverability while preserving the original content. Detailed transcriptions and keyframes capture the nuances and technical insights that convey the full value of each session.
Note: A comprehensive list of re:Invent 2025 transcribed articles is available in this Spreadsheet!
Overview
📖 AWS re:Invent 2025 - Sustainable computing for climate solutions (AIM417)
In this video, AWS Solutions Architect Guyu Ye and Technical Account Manager Nitin Pathak discuss sustainable high-performance computing for climate solutions. They explain how climate challenges are complex due to interconnected systems and urgent timelines, requiring efficient data processing and simulation capabilities. The session covers AWS's sustainability shared responsibility model, achieving a PUE of 1.15 in data centers, and the Well-Architected Framework's sustainability pillar. Key HPC solutions presented include AWS ParallelCluster, AWS Batch, and AWS Parallel Computing Service, along with Amazon FSx for Lustre and Graviton processors offering up to 60% better power efficiency. A case study from the University of Oxford's APAL project demonstrates detecting brick kilns across 1.5 million square kilometers using satellite imagery and machine learning, achieving an 80% infrastructure cost reduction and processing 1.2 million images while significantly reducing carbon footprint through optimized compute resources and Open Data on AWS.
; This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.
Main Part
Climate Change Complexity: Understanding the Challenge and the Urgency
We're ready. Hello everyone. Hope your re:Invent event has been great. My name is Guyu Ye. I am a Senior Solutions Architect supporting AWS Social Responsibility and Impact modernization. Hi everyone, I'm Nitin Pathak. I'm a Technical Account Manager at AWS and part of the sustainability team for almost two years now. Thank you so much for joining our session on sustainable computing for climate solutions.
In today's session, we're going to first talk about climate challenges and the complexity of solving climate challenges. We're also going to talk about why HPC, or high performance computing, is important in solving climate challenges. After that, we'll dive a little bit deeper into some patterns in terms of sustainable HPC workload, and then follow up with a case study from the University of Oxford to see how they're applying some of these practices into their HPC workload. Then we'll summarize with some next steps.
But before we get started, by show of hands, how many of you have been impacted by climate change over the past few years? Looks like everyone has been impacted. All right, Nitin, I saw that you got impacted. Can you share your experience and how it felt and what did you have to do to adapt?
Certainly. So I live in a part of India called New Delhi, and every winter we have a concern when the temperature starts dipping. There's a lot of smog in the air, so it's low visibility, dusty particles, and the air quality index gets really high. The pollution gets really high, so we have to live with air purifiers all the time, and it becomes problematic. People have problems with breathing and going outside. So this becomes a challenge almost every year, and we hope there's some solution for that.
Yeah, and it probably doesn't help with health equity across different socioeconomic groups. It certainly doesn't. I'm sure some of you are experiencing similar challenges with climate change. We're having more and more extreme weather events. For example, our summers are getting hotter and lasting longer, which not only reduces yields for major crops but also leads to deaths. Our hurricanes are lasting longer and intensifying faster, which makes disaster recovery efforts very difficult. In some parts of the world, increased precipitation leads to flooding; in others, decreased precipitation leads to drought, which impacts food security. And wildfire seasons are lasting longer, causing damage not only to property but also to air quality through pollution.
I have the opportunity to work with many customers that are tackling some of these challenges. At AWS Social Responsibility and Impact, our vision is to improve lives every day. And we believe that cloud and AI technologies are powerful tools to help build solutions that tackle some of these challenges in health, education, and climate.
But building climate solutions is difficult. In fact, climate challenges are very complex because, first of all, the climate system itself is complex. As we know, there are different components in the climate system. We have the atmosphere with the air. We have the hydrosphere with the water, the lakes, the oceans. We have the cryosphere with the ice. We have the lithosphere, the land, and we have the biosphere, the living beings. We are part of it. And these five components interact with each other in very complex ways, which makes it very difficult to understand the impact of any intervention.
Apart from the complex climate system, we also have a very complex human society. This is not just socioeconomic, political, cultural, but also because climate change is a global issue. Communities and regions are impacted very differently. So there are also disagreements in terms of the best way to adapt and mitigate. And then the urgency makes solving climate challenges even more difficult because now we have to evaluate the long-term and short-term trade-offs.
Scientists tell us that we have a limited window to make unprecedented headway in order to keep our global temperature rise to below 1.5 degrees Celsius. And as we are taking action now, we also have to evaluate the impacts in the long term to make sure that our efforts are sustainable.
Technology as an Enabler: Data, Storage, and Compute Requirements for Climate Solutions
The good news is that we are making progress. We're making our buildings greener, taking sustainable building practices into greater consideration: building design, the type of materials we use to construct our buildings, and the type of energy we use and when and how we use it. When it comes to energy, we're using more renewable energy: wind, solar, geothermal, hydro, and more. When it comes to food supply, we're applying more sustainable agriculture practices so that we can reduce water waste and improve soil health. And we're also electrifying our transportation sector so that we can reduce the harmful emissions from conventional combustion engines.
In order for all this to work effectively, we need technology. Technology enables and accelerates climate solutions. Now, as solutions architects or researchers or innovators, as we are translating climate challenges into technical requirements, we really need our technology stack to do three things and do three things very well. First of all, in terms of data.
Data is important in any system, but it's extremely important in building a climate solution because, as we mentioned earlier, climate systems are extremely complex. In order to understand a complex system, we need data from different sources. This could be supply chain data to understand carbon footprint. This could be satellite imagery to understand vegetation indices. This could also be the sound of the ocean to understand ocean noise pollution. This data needs to come from various sources, and some of it can be very large. But the ability to acquire high-value data is extremely important.
And secondly, we need a storage solution, and because our data comes from different types of sources and can be very large, our storage needs to be able to accommodate that. And because this data can be large, we also need to be able to retrieve it easily and efficiently. So now we have the data and we store the data. Now we can actually understand the patterns behind the data.
So the third component is compute. We need our technology stack to be able to do large-scale data processing and simulation so that we can understand the patterns behind the data. And I hear that Nitin has some recommendations, along with a technology stack that can help you build solutions for your climate problems.
High Performance Computing for Climate Research: Power and Sustainability Challenges
Have you heard of high performance computing? A show of hands: have you been using it? It is an amazing and efficient way for technologies to work together to provide an infrastructure and a setup that helps us conduct our research and find solutions for complex problems, specifically problems related to climate research. I've been an HPC admin myself for a large part of my career, and I know this is extremely useful. However, there's one problem with it.
An HPC grid usually consists of hundreds and hundreds of servers. That means it can consume loads of energy. Setting this up is never easy. Even though it is the right tool set for our solution, it might not be the easiest or the most sustainable way to do it in an on-premises setup. Thus, there needs to be a fine balance. That's where AWS comes in; let's see how it can help you next.
AWS Shared Responsibility Model: Sustainability of the Cloud and the Well-Architected Framework
When it comes to sustainability, we also follow the shared responsibility model. You might already be familiar with the shared responsibility model of security, but in terms of sustainability, AWS is responsible for sustainability of the cloud. That means our global infrastructure, including our data centers, our electricity supply, servers, cooling, building materials, and so forth. Customers, you are responsible for sustainability in the cloud, so that's your workloads. Region selection, how you're using your data, how you're storing your data. If you're working on machine learning workloads, that could also mean what type of model you're choosing, how you're building your model, how you're deploying your model and running your inferences and scaling your model.
Now diving a little bit deeper into sustainability of the cloud and our AWS global infrastructure. AWS continues to relentlessly innovate our infrastructure to build the most secure, performant, resilient, and sustainable cloud for our customers worldwide. We have 120 Availability Zones across 38 geographic regions, serving customers in 245 countries and territories. According to an Accenture study, workloads on AWS are up to 4.1 times more efficient than on premises. How we're able to achieve that is through our data center design, which is enhanced for tomorrow's AI workloads.
One way we evaluate data center efficiency is to use this metric called Power Usage Effectiveness, or PUE. A lower PUE indicates a more efficient data center, and a PUE of one is perfect. As you can see, our AWS global infrastructure has a PUE of 1.15, which outperforms on-premises average as well as public cloud industry standard. We did that through optimizing our data center design with 46% reduced mechanical energy as well as 35% less embodied carbon in the concrete that we use.
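To make the metric concrete, here is a small, hypothetical illustration of what a PUE of 1.15 means in practice; the energy figures below are made up for the example and are not AWS measurements.

```python
# PUE (Power Usage Effectiveness) = total facility energy / IT equipment energy.
# A value of 1.0 would mean every kilowatt-hour goes to the IT equipment itself.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Return the data center overhead factor."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical figures: 1,150 kWh drawn by the facility to deliver 1,000 kWh
# to servers, storage, and networking gear.
print(pue(1_150, 1_000))  # 1.15 -> roughly 15% overhead for cooling, power delivery, etc.
```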
So, simply by bringing your workloads to AWS, you are already benefiting from this carbon reduction. Depending on the region where your workloads are running, you will receive different benefits in terms of energy efficiency. By optimizing your workloads on AWS, which Nitin will talk about in a little bit, you will reduce your carbon footprint even more. AWS has been playing its part, making the infrastructure more efficient for you by various means.
Now, we also don't leave it there. We provide you with enough material so you can improve your workloads and act on it. You might have heard of this from your solutions architects; it's a very common thing from AWS: the Well-Architected Framework. It was originally built on five pillars: operational excellence, security, reliability, performance efficiency, and cost optimization. When we added the sustainability pillar in 2021, our focus was to have a standard framework that users can utilize to apply the same strategies to their workloads and architectures, to continually improve and reduce their overall carbon emissions.
When we follow these practices and review our workloads against these best practices and pillars, we are able to identify improvements to our overall workload setup. The sustainability best practices are organized around six foundational focus areas, which we call the cornerstones of any architecture we define for you. When we think about the sustainability of our workloads, we talk about region selection, alignment to demand, data, hardware and services, software and architecture, and, most importantly, process and culture. That is what we practice within ourselves as well.
In addition to that, we also have the Customer Carbon Footprint Tool. Just as you use the billing dashboard regularly to visualize where your costs are going, this tool gives you a view of where the carbon emissions for your workloads are coming from. It provides a breakdown by region, service, and overall usage per account, so you have clarity on your carbon emissions, where they originate, and what you should consider going forward, along with a proper measurement and reporting tool.
Now with this, we have the ability to visualize what we have.
Building Blocks for Sustainable HPC: Data Management and Storage Optimization on AWS
Now we go back to HPC and how AWS can help you get to a sustainable high performance computing setup. The building blocks for sustainable high performance computing setups are as follows, divided into three common layers. The first is the core framework, the foundation, where the storage sits in terms of file systems and where our network lives. We have Amazon FSx for Lustre as the high-performance file system, and Elastic Fabric Adapter as the high-speed network interface for your HPC cluster.
Then we have the compute and scheduling services, and we have options of AWS ParallelCluster, AWS Batch, and AWS Parallel Computing Service. I'll do a deep dive on this to show you where we can utilize each of these. Finally, how do we visualize and interact with the system? We have Amazon DCV and the Research and Engineering Studio on AWS that can be utilized for you to visualize and work with your workflows.
Now we go back to the aspects which Guyu spoke about, the key areas and avenues that we want to focus on as we build solutions for our climate research. The first one is data. Data is pivotal for any kind of research, especially climate research. We need a lot of data from various sources to get the most accurate results. If the data is not good, it will not give us good enough results.
It will be stored across various sources, coming from various places, populated in different locations, and we need to make use of all of it. There's one concern with this: the data we might need for research can run into exabytes overall, and storing all of it carries a carbon cost. That means our storage resources will be consuming power with all this data stored, coming from various sources, sitting in our storage systems, and being used over and over again.
Another aspect is data transfer. Most of the customers we have worked with who want to reduce their carbon emissions tend to overlook the emissions from transferring data from a source to a destination. That's a key contributor to carbon emissions, and we need to reduce it. The key insight here is that data is reusable, and we need to find ways to take advantage of that.
Well, AWS has found one way to do this for you by creating Open Data on AWS. This platform brings together data from various sources and research facilities, whether nonprofit or scientific organizations, correlated in a single place, and through AWS Data Exchange you have access to it. One good thing is that it is ready to use on the fly. You don't need to download it to your storage devices or transfer it to your storage systems. You don't need to move it to your S3 buckets. The major chunk of data is simply available where it is: you can start using it on the fly for research purposes and get on with your development.
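As a minimal sketch of what "using the data in place" can look like, the snippet below streams a single Sentinel-2 band from the public cloud-optimized GeoTIFF archive in the Registry of Open Data on AWS; the bucket layout and scene key shown here are illustrative assumptions, so check the registry entry for the exact structure.

```python
# Sketch: read one Sentinel-2 band directly from the public archive on AWS
# without copying the scene into your own bucket. The scene key below is a
# made-up example of the layout; look it up in the Registry of Open Data.
import rasterio

url = ("s3://sentinel-cogs/sentinel-s2-l2a-cogs/"
       "43/R/FM/2024/1/S2A_43RFM_20240115_0_L2A/B04.tif")  # hypothetical scene

with rasterio.Env(AWS_NO_SIGN_REQUEST="YES"):   # public bucket, no credentials needed
    with rasterio.open(url) as src:
        # Stream only the window of pixels you need instead of the whole tile.
        block = src.read(1, window=((0, 1024), (0, 1024)))
        print(src.crs, block.shape, block.mean())
```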
Once we have the data part sorted, next is storage. While we conduct our research, we will require file systems, and our research will also generate a lot of artifacts that spin out of it. How do we manage these? Those artifacts will again be of a huge volume, so we first need to identify data access patterns and make sure we reduce our usage on that front as well. When we're using Amazon S3, there are lifecycle policies that we can certainly utilize.
These lifecycle policies help move data from tiers for frequently used data to tiers for rarely used data, and even delete it when it is no longer being used at all. Considering that the raw source data remains available to us and compute can be readily provisioned, we don't have to store unnecessary data; we can get rid of it to keep our costs in check and keep our emissions in check as well.
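Here is a minimal sketch of such a lifecycle policy applied with boto3; the bucket name, prefix, and tiering thresholds are hypothetical placeholders for the pattern described above, not values from the session.

```python
# Sketch: tier intermediate research artifacts down and eventually expire them.
# Bucket name, prefix, and thresholds are placeholders for illustration.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="climate-hpc-artifacts",           # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-then-expire-intermediate-results",
                "Filter": {"Prefix": "intermediate/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},  # rarely read after a month
                    {"Days": 90, "StorageClass": "GLACIER"},      # archive after a quarter
                ],
                "Expiration": {"Days": 365},                      # delete once nobody needs it
            }
        ]
    },
)
```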
The next part is the file system. For an on-premises high performance computing setup, we require a file system that is very fast and very efficient, and we have to replicate something similar with AWS services as well. That's where Amazon FSx for Lustre comes into the picture.
It provides great benefits such as intelligent tiering, which moves data from one storage tier to another and helps you reduce your emissions. It also provides automated lifecycle management so that you can clean up data, and its data archival capabilities, along with the integration with S3, help you move your artifacts to the right place.
Overall, when it all comes together, Amazon FSx for Lustre provides a high-performance file system for your high performance computing and can reduce your storage footprint compared to a standard data center by up to 50%. That's a huge difference in terms of sustainability and carbon emission efficiency.
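As a rough sketch of the S3 integration described above (not the presenters' exact setup), the call below provisions a scratch FSx for Lustre file system linked to an S3 bucket so input files are lazy-loaded on first access and results can be exported back; the bucket name and subnet ID are placeholders.

```python
# Sketch: scratch FSx for Lustre file system backed by S3. Names are placeholders.
import boto3

fsx = boto3.client("fsx")
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                       # GiB; smallest SCRATCH_2 size
    SubnetIds=["subnet-0123456789abcdef0"],     # placeholder: your cluster's subnet
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",                    # short-lived scratch for one campaign
        "ImportPath": "s3://climate-input-data",          # hydrate files from S3 on first access
        "ExportPath": "s3://climate-input-data/results",  # export artifacts back to S3 when done
    },
)
```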
Compute Solutions for Climate Research: AWS ParallelCluster, Batch, and Parallel Computing Service
And the final component: compute. This is where AWS has been doing a lot of innovation and making strides on how we can harness the power of the cloud to conduct high performance computing research. Like I said, a typical cluster takes hundreds of servers at a minimum just to get started for various workloads.
I spoke about three solutions. We'll start with AWS ParallelCluster. If you're coming from an HPC background and you have OpenMP and MPI-based workloads that you want to replicate in the cloud, this is a good starting place for you. It provides a scheduler that integrates with our auto scaling mechanism, with clusters and instances that can scale up and down based on requirements. We have seen that using AWS ParallelCluster can reduce carbon footprint by up to 57% compared with a standard HPC workload, so it is efficient. It integrates well with AWS services such as FSx for Lustre, EC2, EFA, and NICE DCV for visualization, and gives you a good platform to conduct research. If your workloads are already built for on premises, you can bring them over and start using them right away.
The next one is AWS Batch. AWS Batch can be an extremely efficient scheduling system for your high performance computing, provided you have loosely coupled workloads, or your workloads are already containerized (or you're building them on containers) and you want to run them in parallel. It provides a job scheduler that handles the scheduling itself, and it can also help reduce compute time by up to 60%. As we reduce compute time, resources are used for a shorter period, which means your efficiency increases in terms of emissions.
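A minimal sketch of that loosely coupled pattern with boto3 is shown below, assuming a job queue and a containerized job definition already exist (their names here are placeholders): one array job fans out over many independent tiles instead of keeping a cluster idle between runs.

```python
# Sketch: submit one Batch array job that processes 500 independent tiles in parallel.
# Queue and job definition names are placeholders.
import boto3

batch = boto3.client("batch")
response = batch.submit_job(
    jobName="tile-processing",
    jobQueue="climate-hpc-queue",            # hypothetical job queue
    jobDefinition="geospatial-worker:1",     # hypothetical containerized job definition
    arrayProperties={"size": 500},           # 500 child jobs, one per tile
    containerOverrides={
        "environment": [
            {"name": "INPUT_PREFIX", "value": "s3://climate-input-data/tiles/"}
        ]
    },
)
# Each child container can read AWS_BATCH_JOB_ARRAY_INDEX (set by Batch)
# to pick which tile under INPUT_PREFIX it should process.
print(response["jobId"])
```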
And the last one. Let's say you're just getting started: a researcher without experience in HPC or container workloads, and you want to build a carbon-intelligent orchestration system. In this scenario, AWS Parallel Computing Service can be really useful, as it does all the heavy lifting for you. You don't have to worry about what kind of compute to put in place or how to do the scheduling. You just build your job, your research algorithm, your code, and it will be deployed in the most efficient way, which also helps reduce your wait time. It is easy to set up and prioritize, so you can focus on your research and not worry about the infrastructure. All of these combined give you the core of a compute platform that can be utilized for research.
Next, we'll talk about the instances. AWS has been doing a lot of work on instances focused on high performance computing and accelerated computing. We have HPC-optimized instances that are efficient while we conduct research and run HPC jobs, and we also have accelerated computing instances for graphics-intensive workloads, like the example we'll show later where graphical results also need to be managed. So we have instances developed for those needs as well.
Guyu spoke earlier about sustainability of the cloud, and AWS has been doing a lot of work on this front. When it comes to efficient silicon, meaning chip design, we work from the bottom up, starting with the chip design itself to see where we can improve. There are three key chips. The first is Graviton, which may be the most common one for general compute purposes; Graviton processors are extremely power efficient, up to 60% more power efficient than standard EC2 instances of a similar family.
A lot of our research requires model training and model optimization, and for that we have Inferentia and Trainium. Inferentia was specifically built to give you better performance per watt. That means as we move to newer generations of Inferentia, it keeps improving in power efficiency every time it runs inference.
From Inferentia 2 to Inferentia 3, the improvements continue. Trainium, for training purposes, can give up to 25% better energy efficiency than a standard EC2 instance of the same kind. As we keep developing newer versions, the focus always remains on how Trainium can be used to train your models faster and more efficiently.
Now, I spoke about all these components. What will it look like when you actually bring it all into play? How will all of this come together? I'll show you a few sample diagrams in case you want to utilize AWS ParallelCluster, AWS Batch, or AWS Parallel Computing Service. In any of these scenarios, the architecture will look something like this.

Let's say you are a researcher conducting climate research and you want to utilize ParallelCluster. That means you already have HPC-based jobs, OpenMP or MPI-based. Here you set up NICE DCV for visualization and define how it interacts with your head node and your queues. You will have auto scaling, and you will create queues for your requirements, whether a high-memory queue, a GPU-based queue, or a compute-based queue, and set all of this up in your VPC. FSx for Lustre serves as your file system, interacting with your various compute nodes, and you can run and schedule your jobs against it. The benefit of ParallelCluster is, like I said, that if you want to manage things yourself and identify which compute resources to use, it gives you the flexibility to choose those resources. So if you have a more customized job that requires specific tuning and you want to take control into your own hands, ParallelCluster is the option for that scenario.
Next is AWS Batch, and as I mentioned, this is a similar setup. The difference is that if you have more loosely coupled or container-based workloads that run in parallel, AWS Batch is the solution for you: you deploy your containers and they run in parallel across these various queues and do the work for you. The job scheduling is managed by AWS Batch, so scheduling is taken care of; your only job is to run the workloads and have this set up. Again, you have the option of choosing what kind of instances you would like on the back end, so that's one more control in your hands.
The third option is AWS Parallel Computing Service, which requires the least operational overhead altogether. You focus only on submitting your jobs and letting them run on the back end; managing the cluster, the compute, and every other resource is handled by AWS Parallel Computing Service. You concentrate on developing your job code and building your solutions; you feed the input in, get it done, and once you have the artifacts ready, you're good to go. So this is what AWS HPC architectures could look like, and now we'll take a look at what they can lead to and what kind of research they can facilitate. For this we have a case study about how the University of Oxford has been benefiting from this. Thank you.
University of Oxford Case Study: Detecting Air Pollution Sources in the Indo-Gangetic Region
So I'm going to tell you a story. The story starts in the Indo-Gangetic region, so that's in the India, Pakistan, and Bangladesh area. Air pollution is a big problem. In fact, this is one of the most polluted areas globally, with pollution levels exceeding the WHO standard by up to 10 times, and that leads to over 2 million premature deaths annually. But because there is no precise data in terms of where the pollution sources are, the efforts to address this crisis have been hindered for decades.
One of the main pollution sources is brick kilns. Brick kilns are kilns used to make bricks, and in the process, the kilns emit dangerous pollutants such as carbon dioxide and particulate matter. Because of the rapid rise of the real estate industry, many more kilns have been built, and a lot of them operate without regulation.
Now enters APAL, or the Air Pollution Asset Level Detection, which is a research project from the Smith School of Enterprise and Environment at the University of Oxford.
So APAL believes that clean air is a fundamental human right. They envision communities that are free from air pollution and believe that clean air should be prioritized for current and future generations. So how do they plan to do that? They want to detect all the brick kilns in the Indo-Gangetic region, which covers 1.5 million square kilometers of land. After detecting all of them, they want to create a map, an interactive dashboard, so that communities can understand where the kilns are and make more data-driven decisions.
Now for researchers, when you start working on research, the first step is to start looking at some of the data and then maybe use the local machine to run some analysis, do some geospatial data processing, and build some machine learning models. As they were working on this project, they realized that local machines couldn't handle the scale of data storage, processing, and machine learning workloads because they were working with millions of data points from different types of data sources.
The main data they're using is satellite imagery. They chose satellite imagery because satellites offer consistent, wide-area, historical Earth observations that help them understand the region they are studying. There are different types of satellite imagery. A lot of it is publicly available, such as the Sentinel-2 and Landsat missions; some is only commercially available. There's also the difference between low-resolution and high-resolution satellite imagery.
So if we're looking at brick kilns in low resolution versus high resolution, you see the clear difference. The images on the right are the lower-resolution ones at 10-meter resolution: a brick kiln looks like an orange blob made of a handful of pixels. You probably wouldn't be able to confidently say it's a brick kiln; you can only say it might be an orange oval object. In the higher-resolution imagery on the left, at 0.5-meter resolution, you can confidently say it's a brick kiln. But higher-resolution satellite imagery can be expensive.
So what the researchers did is start out with low-resolution imagery from Sentinel-2. They built a random forest classification model to identify orange oval objects, and here is one such object being identified: a potential brick kiln. As we know, with low-resolution satellite imagery we cannot confidently say it is actually a brick kiln. The result was that they identified 31,000 candidate brick kilns in Pakistan, which is far more than actually exist, so they understood that there were a lot of false positives.
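A simplified sketch of that low-resolution screening step is shown below, assuming per-pixel Sentinel-2 band values have already been extracted into feature rows with labels for known kiln and non-kiln sites; the file names are placeholders, and the real APAL pipeline is certainly richer than this.

```python
# Sketch of the low-resolution screening step with a random forest classifier.
# Input arrays are hypothetical placeholders: rows = pixels, columns = band values.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X = np.load("band_features.npy")   # shape (n_pixels, n_bands), placeholder file
y = np.load("kiln_labels.npy")     # 1 = labeled kiln pixel, 0 = background

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)
print("screening accuracy:", clf.score(X_test, y_test))

# Pixels flagged here become "potential kilns" that are re-checked later against
# high-resolution imagery, which keeps the expensive data purchase small.
```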
They then used these identified objects as potential brick kilns for further analysis: they acquired high-resolution imagery only for the areas that had been flagged and ran a high-resolution workflow to pin down the exact locations. Here they are annotating brick kilns in the high-resolution satellite imagery and running computer vision models to identify where the brick kilns actually are.
With that, they are able to create an interactive map of all the brick kilns in the region, as well as other pollution sources such as boilers and steel, and primary energy generation sources such as coal, fossil fuels, and more.
When you click into each of the assets, you can see the actual impact of these pollution sources, and this really helps communities understand their areas better. They are expanding their work into Africa as well to help communities there make data-driven decisions. Tying this back to some of the best practices and patterns that Nitin shared earlier, for the APAL solution: in terms of data, they use open data as the foundation of their analysis, pulling Sentinel-2 from the open data registry so that they don't have to store the data themselves. For storage, they analyze the data access patterns and only store the data that's needed.
In terms of compute, they run their low-resolution and high-resolution workflows on EC2, but they understand that different workflows require different resources. For example, the low-resolution workflow requires more compute and the high-resolution workflow requires more memory, so they optimize by choosing their EC2 instance types accordingly, using the right tool for the job. As a result, they were able to process 1.2 million satellite images and save over 17,000 compute hours, with up to an 80% reduction in infrastructure costs and a 90% reduction in monitoring time and task runtime.
Solomon, the machine learning and deep learning specialist, has said, "We started with a social goal, and AWS was a means to reach that goal." What's next for the APAL group? As you already saw, they are expanding their work into the Africa region to help communities there understand air pollution problems better as well. They also launched a volunteer program along with an app for on-the-ground pollution reporting, which will help them capture additional data points and create more comprehensive maps of pollution sources. They are also looking to leverage additional AWS services, such as Bedrock, for geospatial foundation models.
Key Takeaways and Next Steps: Choosing the Right Tools for Sustainable Climate Computing
Now we're going to talk about next steps and a little bit of summary, Nitin. Absolutely, sure. Let's go through the key takeaways. First of all, high-performance computing doesn't require huge data centers powered by thermal energy; we can get it done sustainably, and we have the ways and means to do so. If setup, and most importantly speed, is a concern: you don't need to wait to get your high-performance computing clusters up and running. You can get them up in a matter of minutes and start with your research. That's something very useful for you.
The next part is identifying the data sources and data transfers. We don't have to keep moving data from one source to another. We don't have to worry about storing all the data which we need for our research at various places. We don't have to worry about those costs. We don't have to worry about those emissions. We have everything in a single place that we can just directly go ahead and use, utilizing the scalability and the flexibility the cloud has to offer, that AWS has to offer.
The third thing is understanding the data access patterns and the requirements: what kind of file system you will require for your computing cluster. It is important to have that set up properly. A standard NFS-based shared file system like EFS might not always be the best solution; sometimes we need faster options, and we have solutions for that. If a slower option is enough, there are choices available that are more effective for you. Choose and identify accordingly; you have the ability and flexibility to manage that on your own.
And the performance requirement: what kind of compute do you need, and how much effort do you want to put into managing that cluster yourself? Add these elements together and finally choose the right tool for the job. Does your workload require a good stereographic setup for 3D imagery processing as well? If so, utilize NICE DCV and Research and Engineering Studio on AWS in that capacity. If an HPC or accelerated instance is the right solution for you, utilize ParallelCluster and add it there. And finally, if you just don't want to manage any of these elements yourself, utilize AWS Parallel Computing Service.
Understand your requirements. What is the goal for you? What's your business use case? When do you want to achieve it? What speed and cost considerations come into the picture? Analyze all of them, and you can find out what solutions exist. You can easily go in, and there's also an ability for you to try out these things and see which one fits best for you, then start using it.
For that, we have a few more resources that can be helpful: an HPC workshop, AWS sustainability resources, and our Impact page. These are things you can rely on and ultimately verify. We're also launching a new initiative to showcase how we're using AI to help solve humanity's most pressing problems; if you want to learn more about how our customers are doing so, check out the AI for Good page.
Last but not least, level up your skills on AWS Skill Builder. This is a great place for you to build your cloud and AI skills your way at your own pace. If you can, please complete the session survey in your application. We'll really appreciate that, and thank you so much for joining our session. If you have any questions, we'll be happy to answer them and help you. Thank you so much for attending.
; This article is entirely auto-generated using Amazon Bedrock.