🦄 Making great presentations more accessible.
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of the original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.
Overview
📖 AWS re:Invent 2025 - Amazon Leo: Building a Low Earth Orbit Satellite Network on AWS (AMZ302)
In this video, Gary and Andy explain how Amazon Leo (Project Kuiper) built a Low Earth Orbit satellite system using standard AWS services. They detail solving five impossible problems: building custom 100 Gbps satellites with ARM-based SOCs and custom ASICs, scaling physics-based simulations on Graviton instances with FPGAs, managing Space IoT for thousands of satellites using serverless applications, optimizing network planning for 1.6 million ground cells with quality of service commitments using SageMaker and Fargate, and securing data with hardware encryption and AWS KMS in Nitro Enclaves. The satellites travel at 17,000 mph, performing handoffs every 30-40 seconds. They demonstrate Direct to AWS (D2A) connectivity enabling customers to connect remote locations like ocean wind farms, mining sites with Outposts, and film sets to their VPCs through Customer Virtual Networks, accessing services via PrivateLink while maintaining end-to-end encryption with MACsec.
This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.
Main Part
Introduction: Building Amazon Leo Satellite System on AWS
Good afternoon, everyone. Thank you for coming and joining us today. We appreciate you joining us instead of going to see Werner's presentation. Obviously, we're more exciting than he is. It's true. Today we're going to be talking about Amazon Leo and how we built this Low Earth Orbit satellite system on top of AWS. I'm Gary, and with me is my colleague Andy. We'll be taking you through this today.
But before we get started, I just want to highlight that this is a session about Amazon on AWS. Anyone out here familiar with Amazon? Maybe these are some brands that you're familiar with. Anyone have a Fire TV at home? Yeah. A little bit of Amazon Music, groceries delivered from Fresh? How about watching NFL football on Prime? Yeah? Alright, so you're familiar with the kind of work that we do.
What's interesting is Amazon is actually treated like any other customer on AWS. They don't get a lot of special treatment. They ask for a lot of special treatment, but they don't always get it. What that means is you as customers of AWS get all the same benefits as Amazon does. They utilize the same resources on AWS that you do. Now, you may do different things with them. You may not have events like Prime Day.
These are just some data points from this past summer's Prime Day: 8.9 trillion log events analyzed in GuardDuty, 807 million records per second streamed through Kinesis, 1.5 quadrillion ElastiCache requests. Big numbers. I can vouch that my wife, with her searches on Amazon that day, delivered at least a million of those. Now, your services may not require this type of scale, but the benefit you get from having Amazon be a customer on AWS is that they really push the envelope. They force AWS to push the envelope, force us to do better, force us to deliver bigger. And as a result, you as customers of AWS benefit from all of this.
Amazon is constantly pushing us to higher and higher heights. But today we've gone even higher. [Launch audio] "L minus 7 minutes. Status check to proceed with terminal count. Atlas systems: propulsion, go. Hydraulics, go." Sorry, there should be audio. Oh, you guys can hear the audio. "Go LO2, LH2. [inaudible] You have permission to launch. Proceeding with the count. Three, two, one. We are now moving at 4,000 miles an hour. Nominal. Mechanical proceeding nominally. All Kuiper satellites have been deployed successfully. Congrats, Kuiper."
Amazon Leo, or some of you may recall it being named Kuiper, Project Kuiper, now pushing the envelope off the planet. We are going now into space. We're building satellite systems, satellite systems that are bringing reliable connectivity across the globe for consumers, businesses, and enterprise and government use cases. AWS is part of that process. Okay, that's what we're going to be talking about today.
So let's look at our agenda. It's actually quite simple. First, we're going to talk about how Amazon Leo built its service using AWS. Again, I want to reiterate they're using the same AWS services as all of you have access to. Second, we want to give you some guidance on how you can take advantage of Amazon Leo for your own services so you can apply it to your business. Okay, so to get us started, I'm going to hand over to Andy to tell you more about how Amazon Leo was built.
Engineering the Impossible: Custom Hardware and ASICs for 100 Gigabit Space Communication
Great. Hi folks. Can you hear me? Great, cool. So we are going to talk about how we built Amazon Leo on AWS, and I've got to tell you, building something at the scale of Amazon Leo is pretty much impossible. And it's not just one impossible problem, it's like five impossible problems: from building custom ASICs and custom hardware at the lowest layer for a 100 gigabit per second communication satellite, to building a custom e-commerce stack to run telecommunication services for the world. This is not a talk about how AWS solved all those problems for us.
This is a talk about how we use AWS to not reinvent the same solutions to commodity problems that everyone else has already invented, and instead enable us to focus on our core business problems.
So let's start with an overview of the Amazon Leo system. As you might imagine, we have some satellites and some ground stations. If you zoom way out from Amazon Leo, it looks a lot like a cell network that you use to connect to the internet from your phone every day. Instead of a phone in the Amazon Leo system, we have a customer terminal on your roof or somewhere with a view of the sky. It connects wirelessly not to a cell tower somewhere nearby, like in the parking lot, but to one of our satellites in low Earth orbit about 600 kilometers above the Earth.
You can't run a fiber optic cable up there, so instead we have to build another wireless link, our gateway link, to connect our satellites to our ground gateway antennas, which are in hundreds of locations all over the world. Once we're back down on the ground, we can do normal things. We can connect our ground gateways to fiber optics and from there to our network core and then to the internet or to your private networks. By building the wireless links ourselves and by building the ground network using AWS Edge sites and AWS Direct Connect, we're building the fastest, most secure space-based network in the world.
When we started on this a few years ago, we had to solve five impossible problems, so we had to start somewhere. Where we started was by building our satellite. This is not something you just go buy, a satellite that does 100 gigabits per second communication in low Earth orbit, and you can build thousands of them. We had to build this from the ground up. This is all our technology.
The way you start with something like this is we're going to reuse as much as we can even though we are building from the ground up. We built with a lot of COTS, commodity off-the-shelf parts. If you look at our satellite, it's kind of like a data center rack that we launched into space. It has a commodity Ethernet switch, and it has a bunch of line cards to run our network, and they run Linux, and we do IP networking all throughout the satellite. It's just kind of like a big rack in space. Our flight computers are connected to the switch and all the line cards.
But it's also a little bit like another thing, which is it's like launching a car into space. It has a CAN bus like your car does, which is a normal terrestrial thing that in your car connects your computer to all of the sensors and actuators in your car. On our satellite we do kind of the same thing. Our flight computer connects through a CAN bus to some really interesting space devices like star trackers and reaction wheels and our propulsion unit and a torque rod. With those devices we can sense where we are in space, and we can control the attitude of our satellite, and we can propel it.
Though they are interesting space devices, we again build them on normal terrestrial automotive SOCs like a Cypress SOC. For the most part, our custom hardware that runs in the data center rack in space uses terrestrial SOCs like a Xilinx Zynq SOC. But you can't just buy all the parts for this. The part you can't buy in particular is the RF modem that is the heart of the Amazon Leo network. We had to build that ourselves. That's a custom ASIC. We built it from scratch.
Even when you're building an ASIC from scratch, you don't start from scratch. You build with a lot of other people's IP. You use ARM instruction set, and that's what we did. We built that from the ground up using reusable components, and we had to spend years optimizing our code, optimizing that chip, multiple revisions. That was definitely the hard part. That's what we wanted to focus on.
Testing Space Systems on Earth: Scaling Satellite Simulations with AWS Graviton and FPGAs
But it turns out no one actually cares if we built the satellite. What everyone cares about is actually does it work in space. Our biggest responsibility in Amazon Leo is not actually to get a communications network working. It's to not screw up low Earth orbit. We take that space responsibility really seriously. If we mess it up, our kids will be cursing us when they're our age.
The thing that we have to do that's most important to building Amazon Leo is that we have to figure out how to iterate and test space things before we're in space. Space is a really interesting environment. It's hot and then cold. It's dark and then bright, and it really matters that it's dark and bright because we get our power from the sun. So we have to test hundreds and then thousands and then actually millions of combinations of how our satellite is going to behave in space.
This is something AWS can help us with. We do a lot of simulation, simulating all these combinations: both normal things that we want to have happen, like flying, and abnormal things, like getting reset by radiation at just the wrong time. We test all these scenarios.
How do we do it? Well, we basically do it by building a physics-based game engine and putting our satellite in it, or rather, putting a model of our satellite in it. Everyone's probably done this, right? You have to build a big game engine loop in which you have a really precise physical model of everything you want to simulate. We have a very precise physical model of our satellite, for example, how it responds to torque, how big it is, how much it weighs, what its inertia looks like, center of mass, all these physical properties. We put this in the simulation.
Then all those interesting space devices I mentioned, like reaction wheels that spin and torque rods that also spin and propulsion units that propel us in one direction, and our sensors, we also have precise physical models of all of those things. So then in our simulation, we wrote this code. It has to be us, it's our satellite after all. In our satellite and in our simulation, we're constantly updating the position and velocity and rotational acceleration and angular velocity of our satellite in all different dimensions so that we can just propagate it forward.
Then we're also applying forces to it in our simulation. This is what it does. We're applying gravity and forces produced by our actuators. We're also applying our controls, controls from our flight computer that produce these forces, and we're just updating that loop over and over again, just like a game engine would. We built this and got it working on our laptop or on one Linux box.
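As a rough illustration of the loop just described, here is a minimal, hypothetical Python sketch: a point-mass satellite propagated under gravity with a semi-implicit Euler step. The names (`Satellite`, `step`) and the single-body gravity model are illustrative assumptions, not Amazon Leo's actual simulation code.

```python
import math
from dataclasses import dataclass

MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

@dataclass
class Satellite:
    pos: list  # position in an Earth-centered inertial frame, meters
    vel: list  # velocity, m/s

def gravity(pos):
    """Point-mass gravity: acceleration toward Earth's center."""
    r = math.sqrt(sum(p * p for p in pos))
    return [-MU * p / r**3 for p in pos]

def step(sat, thrust, dt):
    """One tick of the loop: sum forces, update velocity, then position."""
    acc = [g + t for g, t in zip(gravity(sat.pos), thrust)]
    sat.vel = [v + a * dt for v, a in zip(sat.vel, acc)]
    sat.pos = [p + v * dt for p, v in zip(sat.pos, sat.vel)]

# Propagate a circular ~600 km orbit for one simulated minute, no thrust.
r0 = 6371e3 + 600e3
v0 = math.sqrt(MU / r0)  # circular orbital speed, roughly 7.5 km/s
sat = Satellite(pos=[r0, 0.0, 0.0], vel=[0.0, v0, 0.0])
for _ in range(600):
    step(sat, thrust=(0.0, 0.0, 0.0), dt=0.1)

radius = math.sqrt(sum(p * p for p in sat.pos))
```

A real engine would add torque, attitude state, actuator and sensor models, and a far more careful integrator, but the shape of the loop is the same: compute forces, update velocity, update position, repeat.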
But as I mentioned, we have to simulate every combination that we're going to see in space before we fly, because we really don't want to crash. This is something that AWS can help us with. We basically use AWS to scale up our simulation. We built a serverless application for spinning up constellation simulations. You use API Gateway and you can say, create a simulation for me. You can ask it, is my simulation running? And you can say I'm done with my simulation, kill it. So that's the API.
Behind that, we're running a bunch of asynchronous workflows that start up EC2 instances and wait for EC2 instances, and tear them down and retry and load our code, start the simulation. This is what the Step Functions workflow does. The coolest thing about this is that inside there, when we're actually ready to run this, we can run our actual flight code on Graviton instances. As I mentioned, we're not reinventing instruction sets or anything, we're using ARM, and Graviton is an ARM-based platform, so we can run our unmodified flight code on bare metal EC2 instances that run Graviton.
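The lifecycle API described above (create a simulation, ask if it's running, kill it) can be sketched as three handlers. This is a hypothetical stand-in, not the real service: an in-memory dict plays the role of API Gateway, the Step Functions workflow, and the EC2 fleet combined.

```python
import uuid

# sim_id -> state; a stand-in for the real backing store and EC2 fleet
SIMULATIONS = {}

def create_simulation(config):
    """POST /simulations: kick off the provisioning workflow."""
    sim_id = str(uuid.uuid4())
    SIMULATIONS[sim_id] = {"state": "PROVISIONING", "config": config}
    # ...here a Step Functions execution would start instances,
    # retry on failure, load the flight code, and start the sim...
    SIMULATIONS[sim_id]["state"] = "RUNNING"
    return sim_id

def get_simulation(sim_id):
    """GET /simulations/{id}: is my simulation running?"""
    return SIMULATIONS[sim_id]["state"]

def delete_simulation(sim_id):
    """DELETE /simulations/{id}: tear the instances down."""
    SIMULATIONS[sim_id]["state"] = "TERMINATED"

sim = create_simulation({"satellites": 100, "duration_s": 3600})
print(get_simulation(sim))  # RUNNING
delete_simulation(sim)
```

The point of the pattern is that the synchronous API stays trivial while all the slow, failure-prone work (instance startup, retries, teardown) lives in the asynchronous workflow behind it.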
What's even cooler than that is, I mentioned that we're using standard automotive SOCs, which often, or always, have FPGAs on them. We're using FPGAs, Field Programmable Gate Arrays, for the most time-sensitive operations that we do on our satellite. Those typically are radio, like listening to radios and sending radio signals, generating RF, or packet processing at gigabits per second. We can run that FPGA code on the FPGA of bare metal instances with FPGAs, so we basically can run the satellite pretty much unmodified on bare metal instances.
This enables us to scale up our testing to where we need it to be to launch the Amazon Leo satellite. We did this before we launched our protoflight satellites a couple of years ago, as you might have heard. Once we launched, we found that it really worked pretty much exactly the same way. It was almost indistinguishable. It was pretty spooky that day. We were happy, of course, that's what we wanted to have happen, but we always worried, and it turned out to be unfounded. It totally worked.
Space IoT: Managing Thousands of Satellites with Predicted State Models
So this is the first problem, the first problem we had to solve, and AWS helped us scale up. But it's not enough to just have a satellite that hangs out in orbit and doesn't crash. What you really have to be able to do is you have to be able to control that satellite.
Again, we take space safety really seriously. It's even more important than serving customers; if we screw this up, we don't get to serve customers. So we have to be able to talk to our satellites all the time. We have to talk to up to thousands of satellites, right now hundreds, and we have to be able to monitor them. We have to know their current state, and we have to be able to command them, like: please steer around this thing that you might otherwise run into. So it's really important that we always know where they are and can always command them.
And we don't mean always as in the next second. We mean always: if you can't talk to your satellite after four hours, you have a serious problem, because after eight hours you might hit something. So we just always have to be able to talk to our satellite. Maybe hardware is failing; doesn't matter, you have to talk to it. Maybe it's spinning in space and doesn't know which way is up or down; doesn't matter, you have to be able to talk to it. So we have to build, from the ground up, not a normal IoT system that you connect to the cloud, but something kind of like it.
As I mentioned, this is Space IoT: we have to always be able to talk. Because of that, we optimize our management network, our device cloud, not for high throughput or high availability, but for eventually being able to talk. The inevitable outcome is that we end up with low throughput, because we're optimized for being able to talk in any context. So we have to talk to our whole constellation, monitor it, and control it with really pretty low throughput.
What this means is that we are constantly optimizing which satellites we're going to talk to and when, because every second of radio time is really important and we have to make the most effective use of it. To optimize which satellites we talk to when, which telemetry we're going to get from them, and which commands we're going to send when, we have to know where they will be at any given time and whether we will need to send them a command, like: it is time to steer. We also need to know whether we will be able to talk to them successfully, because there could be RF interference or other satellites around or something like that.
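The "every second of radio time matters" idea can be illustrated with a toy contact planner: given a byte budget for one contact window, drain a priority queue of commands and telemetry requests. The priorities, sizes, and item names here are made up for illustration.

```python
import heapq

def plan_contact(queue, budget_bytes):
    """queue: list of (priority, size_bytes, item); lower priority = more urgent.
    Returns the items that fit into this contact window's byte budget."""
    heap = list(queue)       # copy so the caller's queue is untouched
    heapq.heapify(heap)
    sent = []
    while heap and budget_bytes > 0:
        prio, size, item = heapq.heappop(heap)
        if size <= budget_bytes:
            sent.append(item)
            budget_bytes -= size
    return sent

queue = [
    (0, 64,   "cmd:steer-around-debris"),  # safety commands always go first
    (1, 128,  "cmd:update-contact-plan"),
    (2, 4096, "tlm:full-health-dump"),     # too big for a short window
    (3, 512,  "tlm:battery-history"),
]
print(plan_contact(queue, budget_bytes=1024))
# ['cmd:steer-around-debris', 'cmd:update-contact-plan', 'tlm:battery-history']
```

The bulky health dump gets skipped until a longer window comes along, while safety-critical commands are always sent first.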
So in the space internet of things, we're constantly communicating based on predicted state. To get that predicted state, we have to create a precise model of every object in low Earth orbit: where it is right now and where it will be in basically every five-second interval from now until we stop planning, which is typically several days from now. And there are tens of thousands of objects in low Earth orbit, so that's many, many things to hold in memory at one time.
I should mention that people have known for a long time how to propagate the state of an object in low Earth orbit forward in time pretty much as far as you want. If you know that an object is here with a given position and velocity, you can predict pretty precisely where it will be in several days or even several months. But what we need is to be able to query that model to understand where our satellite is in relation to our ground gateways at any time in the future, so that we can optimize our communication.
So we built a service for this, and the cool thing about it is that it's basically a normal serverless application. And so the way we do this is we run that model and we create a complete model of where all the objects in the sky will be for several days, and then we stick that all into an index. And then we can query the index anytime we need to know what is in view of a ground gateway or what ground gateways we can talk to from a satellite.
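A toy version of that predict-then-index-then-query pattern might look like the following, with a 2D circular orbit and a single equatorial gateway standing in for a real orbit propagator and spatial index (all names and thresholds are illustrative assumptions).

```python
import math

RE, ALT = 6371e3, 600e3  # Earth radius and orbit altitude, meters
PERIOD = 2 * math.pi * math.sqrt((RE + ALT) ** 3 / 3.986004418e14)  # ~97 min

def sat_pos(t):
    """Satellite position at time t on a circular equatorial orbit."""
    a = 2 * math.pi * t / PERIOD
    r = RE + ALT
    return (r * math.cos(a), r * math.sin(a))

def elevation(sat, gw):
    """Elevation angle (degrees) of the satellite above the gateway's horizon."""
    dx, dy = sat[0] - gw[0], sat[1] - gw[1]
    rng = math.hypot(dx, dy)
    # cosine of the zenith angle: line-to-satellite dotted with local "up"
    cos_zenith = (dx * gw[0] + dy * gw[1]) / (rng * math.hypot(*gw))
    return math.degrees(math.asin(cos_zenith))

GATEWAY = (RE, 0.0)

# Precompute visibility at 5-second steps over one orbit: this is the "index".
visible = set(
    t for t in range(0, int(PERIOD), 5)
    if elevation(sat_pos(t), GATEWAY) > 20  # usable above 20 deg elevation
)

def in_view(t):
    """Query the index: can the gateway see the satellite at time t?"""
    return (t // 5) * 5 in visible
```

At 600 km, a pass over a single gateway above a 20-degree elevation mask lasts only a few minutes per orbit, which is exactly why the real system indexes several days of predicted state in advance.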
So we knew the model and could run it, but when we needed to scale it up, we used AWS technology that you all can use to scale up too. Anyway, the purpose of the constellation is not just to be safe in space; it's to serve customers and to provide internet access to the world.
Network Planning at Scale: Optimizing 1.6 Million Ground Cells with Quality of Service Guarantees
So, let's do a brief overview of how the Kuiper constellation works. I mentioned that we have some satellites in low Earth orbit, and we're going to have about 3,200 of them. They communicate wirelessly with our ground gateways on Earth. If we get data up to a satellite from your house, we have to get it back down to the Earth, because that's where the internet is today. So we have ground gateways. But the internet is not near our ground gateways. Our ground gateways are more or less required to be in faraway places, away from people, out in the middle of nowhere, in a wheat field. So the internet is not exactly there.
The edge of our network is a point of presence. It's a network point of presence that adapts between our internal Kuiper protocols, which are optimized for space communication and fast-changing networks. We have to adapt between those protocols and normal internet protocols, BGP, IP, that kind of thing. And that's what we do at our Kuiper point of presence. So we connect all of our ground infrastructure using standard fiber optics. In particular, we connect our ground gateways to the nearest AWS Direct Connect location. Because once we're at the AWS Direct Connect location, those are paired with internet providers right there. They have really good internet access, and there are hundreds of them all over the world.
But what you're buying as a customer of Amazon Leo is you're buying access to our network through a customer terminal on your roof. And you're buying access to a radio network in which we're using radio frequency spectrum. About 1.5 gigahertz of spectrum in the Ka band to communicate from our satellites to the customer terminal and back again. And then we have a separate radio frequency that we use to talk between our satellites and our ground gateways.
If you're in low Earth orbit, you're basically moving at 17,000 miles an hour, which means you're going to pass over any spot on the ground in a few minutes, about 3 minutes or less. What that means for a customer terminal, or any spot on the ground to which we're providing service, is that we're switching which satellite serves that spot about every 30 or 40 seconds, sometimes up to 3 minutes, but typically every 30 to 40 seconds. So we designed the whole network around making that handoff and making it smooth.
Our customer terminal basically stops transmitting to and receiving from one satellite and then starts receiving from the next one. This is called handoff. Now, I've just told you that we're using radio frequency communication and doing these handoffs, and we're doing handoffs on the ground gateway side as well, so the network is constantly changing. You may imagine that some of these links fail. Yes, some of them do, so we're constantly planning for redundancy and failover if any of these links fail. Not only do we plan primary links between our satellites and customer terminals, we also plan backup links between our satellites and customer terminals, and between our satellites and ground gateways.
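A toy handoff planner in the spirit of this description might look like the following; the pass times, the 40-second planning cadence, and the primary/backup rule are all illustrative assumptions, not the production algorithm.

```python
def plan_handoffs(passes, horizon_s):
    """passes: list of (sat_id, visible_from_s, visible_until_s) for one cell.
    Returns (time, primary, backup) tuples covering [0, horizon_s)."""
    schedule = []
    t = 0
    while t < horizon_s:
        # satellites usable right now, longest remaining visibility first
        usable = sorted(
            (p for p in passes if p[1] <= t < p[2]),
            key=lambda p: p[2], reverse=True)
        if not usable:
            raise RuntimeError(f"coverage gap at t={t}")
        primary = usable[0]
        backup = usable[1][0] if len(usable) > 1 else None
        schedule.append((t, primary[0], backup))
        # hand off when the primary sets, or at the planning cadence
        t = min(primary[2], t + 40)
    return schedule

passes = [("sat-1", 0, 35), ("sat-2", 20, 75), ("sat-3", 60, 120)]
print(plan_handoffs(passes, 120))
```

The overlap between passes is what makes a backup link possible: whenever two satellites can see the cell at once, the planner records both, so a failed primary link has somewhere to fail over to.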
So we're providing service to the world, to every spot on the Earth, and we have satellites that can provide service to many spots on the ground below them, and they require a connection to one of hundreds of ground gateways that are on the surface of the Earth. So you can see that in our network we have millions of combinations of how we can provide service at any given time. And this is a massive optimization problem because we're not just trying to connect our network, we're trying to optimize our network. We're trying to provide the fastest service to every spot where customers are using just the right amount of our radio frequency so that we're balancing service between all the different cells in our network.
We just have to simplify this. Customer terminals are coming and going just as customers register for our service and turn on their satellite. So we have to simplify, and the first way that we simplify is that we don't try to individually plan every customer terminal.
Instead, we provide access to cells on the ground. We provide service to hexagons on the ground because hexagons are the bestagons, and this helps us simplify the network down to just providing access to 1.6 million spots on the ground. But it's still 1.6 million combinations, and we still have all those satellites and ground station combinations.
But it's even worse because all those customer terminals have different quality of service commitments. In Amazon Leo, we're not just providing best effort service where you pay us and we give you whatever service we can come up with. We're offering quality of service to enterprise customers who pay for 100 megabits per second, and we will give you 100 megabits per second. So that means that when we plan our network, we have to plan with visibility into all the demand and all the cells on the ground.
We can do this. We can point our radio frequency at any spot on the ground. We can choose how much radio frequency to give to any spot on the ground. We can change where we're allocating that radio frequency very quickly, many times a second. So we have the knobs in our physical layer network to optimize this problem, but as you see, it gets even more complicated when we add quality of service commitments.
But beyond quality of service, many factors in the environment also affect what service we can provide, whether the angle at which the satellite is in relation to the ground spot, how far away the satellite is from the ground gateway itself. All these factors affect what service we can provide to any spot on the ground. And the customer demand is changing on a minute by minute basis, just kind of like a normal diurnal trend, or there might be a World Cup or some other interesting event right now. All of these affect demand and where we should allocate our radio frequency on the ground.
This is a super data intensive process to compute the right plan for our network at any time. So one thing about this is that we are not the first ones to build a wireless network that involves beam pointing. There are well known algorithms for allocating radio resources to spots, and some of them use mixed integer or linear programming. Generally speaking, there are algorithms for this, but when we add all of these factors into them, then it's just our algorithm for our network.
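As a deliberately tiny stand-in for that planning step, the sketch below allocates one satellite's capacity across a few cells: committed quality-of-service minimums first, then the remaining capacity in proportion to unmet demand. Real planners use mixed-integer or linear programming over millions of cells; all the numbers and names here are made up.

```python
def allocate(capacity_mbps, cells):
    """cells: {cell_id: {"qos_min": Mbps committed, "demand": Mbps wanted}}.
    Returns {cell_id: allocated Mbps}."""
    # Step 1: satisfy every QoS commitment (capped at actual demand).
    alloc = {c: min(v["qos_min"], v["demand"]) for c, v in cells.items()}
    remaining = capacity_mbps - sum(alloc.values())
    if remaining < 0:
        raise ValueError("cannot meet QoS commitments with this capacity")
    # Step 2: split what's left in proportion to unmet demand.
    unmet = {c: cells[c]["demand"] - alloc[c] for c in cells}
    total_unmet = sum(unmet.values())
    if total_unmet > 0:
        share = min(1.0, remaining / total_unmet)
        for c in cells:
            alloc[c] += unmet[c] * share
    return alloc

cells = {
    "cell-a": {"qos_min": 100, "demand": 300},  # enterprise commitment
    "cell-b": {"qos_min": 0,   "demand": 200},  # best effort
}
print(allocate(400, cells))
```

Even this toy shows the key property: the enterprise cell's 100 Mbps is guaranteed before any best-effort traffic is served, which is what separates QoS planning from simple proportional sharing.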
So we built this algorithm, and we can run it for the continental United States. We can run it for the world. But when we want to scale it up and we want to incorporate all the real-time data that we need about the world, about our customers, and about the current state of the network, this is when we use AWS. The way we use AWS is to manage our data. We manage all of our subscriber data and kind of slow moving data in SageMaker Studio, and all of our fast moving data, like the current state of the network and the current state of customer demand, comes into our network planning system through managed Kafka.
Then we run this big distributed computation that I mentioned on Fargate instances. Once we have a global plan for our network, we push it out to every node at the same time. The way we do that is that we have an active connection through a network load balancer with every node on the network, and so we can push it out using web sockets. So AWS didn't solve the network planning problem, but it enabled us to scale up our solution to the network planning problem and not have to reinvent data architectures from scratch.
Building the Fastest Space-Based Network: Leveraging AWS Direct Connect and Optical Lasers
So the cool thing about low Earth orbit is it's really close by. You know, it's only 2 to 4 milliseconds away from any spot that we're serving on the ground. And another cool thing about it is that the speed of light is faster in a vacuum than it is in fiber on the ground. So if we're going 600 kilometers up into space, that's actually a lot faster than going 600 kilometers in a fiber on the ground.
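Those latency numbers are easy to sanity-check: light covers 600 km in about 2 ms in vacuum, but roughly 47% slower in glass (optical fiber has a refractive index around 1.47, an assumed typical value).

```python
C = 299_792.458      # speed of light in vacuum, km/s
FIBER_INDEX = 1.47   # typical refractive index of optical fiber

def one_way_ms(distance_km, medium="vacuum"):
    """One-way propagation delay in milliseconds."""
    speed = C if medium == "vacuum" else C / FIBER_INDEX
    return 1000 * distance_km / speed

up = one_way_ms(600)              # ground to a 600 km satellite: ~2.0 ms
fiber = one_way_ms(600, "fiber")  # the same 600 km in fiber: ~2.9 ms
print(round(up, 1), round(fiber, 1))
```

So a one-hop trip through space is within the 2-4 ms figure quoted above, and the vacuum path genuinely beats the same distance of fiber.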
And so, after we built this really cool network, it would be kind of a shame if network latency were dominated by fiber on the ground. We could easily turn 8 milliseconds into 100 milliseconds by building the wrong ground network, because our ground gateways are in a lot of different places.
But what we do instead is that we've built a huge ground network with hundreds of sites all over the world. We've connected them using redundant fiber paths to the nearest AWS Direct Connect locations, and we host the edge of our network, the Point of Presence that I mentioned earlier, in AWS Edge sites. Then from there, we're right out to the internet from AWS Edge sites. When we need to get to your workload in EC2 or S3, Gary's going to tell us about the APIs we provide to enable that, but that also goes over AWS's Direct Connect network, which AWS has been optimizing for years.
Probably everyone here has been affected by a fiber cut at some point. You thought you had great internet access, and suddenly it's gone, and it turns out that someone with a backhoe did just the wrong thing at just the wrong time. When we're talking about ground gateways all over the place in farms and fields, we kind of expect that from time to time, yes, there will be fiber cuts. But what's cool about the Amazon Leo network is that we have a second backbone for our network, and that's our backbone in space.
We basically have fiber optics without the fiber; we have just the laser part, in space. Instead of going over fiber on the ground, we can use our optical lasers in space to transmit data not just to the ground gateway nearest the satellite, but to a different ground gateway through another satellite. As you can see, there's no supermarket here.
As I mentioned, we have our ground gateways all over the world at hundreds of sites, and they're connected to AWS's Edge sites, of which there are hundreds. From there, they're connected to the internet and to AWS regions everywhere. So I just told you about how we're building this really fast space-based network involving a couple of hops of radio frequency and thousands of kilometers of fiber, and it's spread all over the world. You should kind of wonder if we're taking really good care of your data security, and we are.
Security from Space to Ground: Hardware Encryption, AWS KMS, and Nitro Enclaves
Data security is built into the lowest layer of the Amazon Leo network. Our custom chip that I mentioned at the beginning of the talk that runs our network and generates that radio frequency communication, our modem, can also do encryption and decryption at gigabits per second. It has a hardware security module that keeps the keys for that private, and it's tamper resistant, and we've tested it. It definitely works. The other end of that encrypted connection is all the way at the edge of our network in the POP that I mentioned, and so it's doing encryption and decryption at gigabits per second for all the customer terminals that it's connected to.
Everyone here has built a secure service, and you know it's really easy to get the encryption part wrong, but then of course it's really easy to get the access to keys and identity management part wrong also. But by building on the same cryptographic primitives that you all use and that other AWS customers use, we can know a few things. One is that we're using kind of known standard implementations of cryptographic algorithms through AWS-LC, and we're also keeping really good track of our keys.
Our keys are stored in AWS KMS, and we never touch them in our data plane code, because they go from AWS KMS into Nitro Enclaves. We only get a short-lived, time-derived key to use in our data plane code for fast encryption at gigabits per second. We protect access to all those keys using IAM, just like you would, so that only the services that should have access to our keys do have access. Another cool thing is that when we're sending your data over the Direct Connect network, we're also using MACsec on the ground.
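The short-lived, time-derived key idea can be sketched with the Python standard library. This is a hypothetical illustration, not Amazon Leo's actual scheme: an HMAC-based derivation binds each data plane key to a time window, so a leaked data plane key expires on its own while the root key never leaves the enclave.

```python
import hashlib
import hmac

def derive_key(root_key: bytes, epoch_s: int, window_s: int = 300) -> bytes:
    """Derive a data plane key bound to a time window (here, 5 minutes).
    The root key would stay inside AWS KMS / a Nitro Enclave; only the
    derived key is handed to the fast encryption path."""
    window = epoch_s // window_s
    info = b"leo-dataplane-v1|" + str(window).encode()  # illustrative label
    return hmac.new(root_key, info, hashlib.sha256).digest()

root = b"\x00" * 32                    # stand-in for the KMS-held root key
k1 = derive_key(root, epoch_s=1_000)   # falls in window 3
k2 = derive_key(root, epoch_s=1_100)   # same 300 s window, same key
k3 = derive_key(root, epoch_s=1_600)   # next window, different key
print(k1 == k2, k1 == k3)  # True False
```

Because derivation is deterministic, both ends of the link can compute the same per-window key independently, with no key material crossing the data plane.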
So first, if someone were to intercept our radio, they would not see your data. They won't see IP addresses, they won't see TLS headers; they'll see encapsulated, fully encrypted data.
But then if they go find our fiber, they won't even see our data. They'll see random encrypted bits protected by MACsec. So our network is secure in transit from customer terminal to the edge of our network, and that is enabled by the same cryptographic primitives that you all use today.
So those are some of the key problems that we solved in Amazon Leo to enable us to get into space and to constantly communicate with our satellites, and then to plan our network and to optimize it to give great quality of service to all the spots on the ground that we're serving. And then also to secure communication and to scale our network to global scale. So now Gary's going to tell us about how you can use that network from your AWS network. Thanks, Gary.
Connecting Amazon Leo to Your AWS Account: Customer Virtual Networks and Direct to AWS
Thanks, Andy. That was super cool. If you didn't catch the data points: these things are moving at 17,000 miles per hour, doing handoffs about every 30 seconds. There was one there, there's another one. This is amazing technology that we're building. Andy walked us through how they used AWS to build these solutions for Amazon Leo. Now, is anyone here in the audience planning to build a low Earth orbit system? Anyone? Really? Good for you. How about the rest of you? I bet you're hoping to use an existing low Earth orbit system like Amazon Leo to improve your systems, right? Okay, great, a few hands.
Well, that's what we want to talk about. Leo can be used for multiple different use cases. It could be used as internet connectivity in remote areas, right? That's kind of a personal use case. But more interestingly, probably for customers of AWS like yourselves, are the industrial use cases. You may have equipment that runs in a far off place of the world, or you may have equipment that runs off the ground and actually in the middle of the ocean, somewhere where bringing fiber or connectivity is either difficult, expensive, or inconsistent. Leo can help us with all these use cases.
So from your AWS perspective, Leo is a networking capability. Let's look at how we can connect it to your account from a networking side. We're going to walk through a really simple customer internet use case first. You're a customer, you're somewhere remote, you want to get to the public internet. Well, you connect to your customer terminal, your Leo antenna. The Leo antenna connects up to the Leo satellite in the sky, around 600 kilometers up, I believe the number was. That satellite connects down to a ground station, from the ground station to a POP, and out to the internet. Pretty simple use case. It doesn't utilize AWS from your account's perspective.
However, what if we did want to integrate it with our account? What can we do then? Leo has this concept called the Customer Virtual Network. What that Customer Virtual Network does is create a virtual device, the Virtual Network Interface, or VNI, which is a virtual representation of your Leo satellite dish, your antenna. Every antenna basically has a one-to-one connection with a Virtual Network Interface. That Virtual Network Interface provides IP addresses, provides DNS, and provides some configuration for what your antenna can do.
And you may have different use cases for different antennas. Maybe you have one use case that only allows for private internet access, meaning it only allows access to your corporate network. You might have another antenna that allows public internet access, and it's controlled at the antenna using the Customer Virtual Network. Now if we look a little bit further, there's another component that we have in here. There's a component called the Virtual Network Gateway. With that gateway, we can now control what our network or what our antenna can do.
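To make the per-antenna policy idea concrete, here is a small data-model sketch. Everything in it is hypothetical: the field names, the `access` values, and the helper are illustrative stand-ins, not Leo's actual VNI configuration schema.

```python
from dataclasses import dataclass

# Hypothetical model of a Virtual Network Interface (VNI): every Leo
# antenna maps one-to-one onto a VNI carrying its addressing and policy.
@dataclass(frozen=True)
class VirtualNetworkInterface:
    antenna_id: str
    ip_address: str
    dns_server: str
    access: str  # illustrative values: "private-only" or "public-internet"

def allows_public_internet(vni: VirtualNetworkInterface) -> bool:
    """Policy check: may traffic from this antenna reach the public internet?"""
    return vni.access == "public-internet"

# One antenna restricted to the corporate network, one with public access.
field_unit = VirtualNetworkInterface(
    antenna_id="ant-ocean-01",
    ip_address="10.42.0.7",
    dns_server="10.42.0.2",
    access="private-only",
)
office_unit = VirtualNetworkInterface(
    antenna_id="ant-hq-01",
    ip_address="10.42.0.8",
    dns_server="10.42.0.2",
    access="public-internet",
)

assert not allows_public_internet(field_unit)
assert allows_public_internet(office_unit)
```

The design point is that the policy rides with the VNI, so the same physical antenna hardware can serve very different use cases depending on how its virtual representation is configured.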
We've got three different gateway options. The first connects you right to the internet; again, more of a consumer use case. The second connects you directly to AWS: Direct to AWS, D2A. We like number acronyms at AWS, so that's the D2A one. The third one is a private network interconnect, a PNI.
The second one, the D2A, will connect to your AWS VPC. The third one will connect to your private network through some kind of meet-me point, network-to-network connectivity.
Here's what it would look like and why you might want to use it. Let's say you have a Leo network. You've got an antenna, but you've got a corporate data center, and in that corporate data center you have some private infrastructure: some servers, some storage, some databases. You need to be able to access it remotely. Through the private network interconnect, you can connect your Leo antenna to your private data center. Again, this is part of what Leo does. This isn't an AWS service that you would run in your account.
But now let's assume that you did want to do something in your account. You wanted to access services that were within your VPC. Well, in this case, if we use the Direct to AWS connection, we could do that. Leo connects to your VPC and services that are within your VPC, let's say EC2 or SageMaker, are now accessible from the Leo antenna. But what about services that aren't in your VPC? There are managed services that AWS has. Well, through PrivateLink you can access those as well. So what we've got here now is services that you're managing in your VPC, your business use case now accessible anywhere in the globe with a Leo antenna.
Now we want to connect this to your AWS account. And one thing I want to highlight is that Leo is not an AWS service. Leo is a service that comes from Amazon, from the Amazon Leo team. It is not a resource in your AWS account. But we want you to have a seamless interface, an experience that looks as if you are building with native resources.
One of the ways we do that is to give Leo some permission in your account. And we do that through a new temporary delegation process from IAM. This is a new service that was announced just a few days ago. And with that delegation, the Leo system has access to your AWS account. And it uses that access in order to provide you with telemetry of what's happening in the Leo service within your AWS account. So as the service runs, telemetry information is populated into your AWS account in CloudWatch, CloudTrail, etc. So again, Leo is not an AWS service, but we are working in such a way that it will appear like an AWS service in your account, meaning that you can collect the telemetry, you can set up the triggers, you can act on events, and you can take action if something goes wrong.
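Since that telemetry lands in CloudWatch in your account, you can alarm on it like any other metric. The sketch below builds a `put_metric_alarm` request for a link-health metric; the `AmazonLeo/Connectivity` namespace, the `LinkState` metric, and the dimension name are assumptions for illustration, so check the metric names Leo actually publishes before using anything like this.

```python
def build_leo_alarm(antenna_id: str) -> dict:
    """Build a CloudWatch alarm definition for Leo link telemetry.

    Namespace, metric name, and dimensions are hypothetical placeholders,
    not documented Amazon Leo metric names.
    """
    return {
        "AlarmName": f"leo-link-down-{antenna_id}",
        "Namespace": "AmazonLeo/Connectivity",   # assumed namespace
        "MetricName": "LinkState",               # assumed metric (1 = up)
        "Dimensions": [{"Name": "AntennaId", "Value": antenna_id}],
        "Statistic": "Minimum",
        "Period": 60,
        "EvaluationPeriods": 3,                  # 3 bad minutes in a row
        "Threshold": 1,
        "ComparisonOperator": "LessThanThreshold",
        "TreatMissingData": "breaching",         # no telemetry = assume down
    }

# To actually create the alarm (requires boto3 and AWS credentials):
#   import boto3
#   boto3.client("cloudwatch").put_metric_alarm(**build_leo_alarm("ant-ocean-01"))
```

Keeping the alarm definition in a plain function like this makes it easy to review the thresholds separately from the API call.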
Real-World Use Cases: Wind Farms, Egress Inspection, and Remote Video Surveillance with Leo
Now let's look at some Direct to AWS (D2A) Leo examples. Again, we're builders here. We want to figure out what we're going to build and how we're going to build it. We've got a couple of examples that we think may align with some of you here in the crowd. What if you're running a wind farm, and it's somewhere out in the ocean? A lot of wind in the ocean, so you put a wind farm there. Not a lot of network connectivity in the ocean, unfortunately. But we want to run a wind farm, and we want to make sure that we are optimizing it for health, predictive maintenance, all those good things.
There are some patterns that we know have been utilized often in AWS to monitor something like a wind farm. AWS IoT Core is a common way to do that. It receives telemetry from your wind farm and stores the data in Timestream so you know what's happening over time. Maybe you're visualizing it with Grafana. But you may also have a trigger that occurs when anomalies are detected in your wind farm, and those anomalies may be processed within something like ECS or Lambda to understand whether this anomaly actually needs us to take action. If we do need to take action, maybe we have an event that then sends signals back to our wind farm to slow down or to halt, or maybe opens up a work order to have a technician get on a boat and go out there to fix the thing.
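The anomaly-triage step in that pattern can be sketched as a small Lambda-style handler. The event shape (`rpm`, `vibration_g`), the thresholds, and the returned actions are all invented for illustration; a real IoT Core payload and a real turbine's limits would look different.

```python
# Illustrative triage: decide whether a telemetry reading needs action.
# Thresholds are made-up values, not real turbine limits.
MAX_ROTOR_RPM = 18.0
MAX_VIBRATION_G = 0.8

def handler(event: dict, context=None) -> dict:
    """Lambda-style handler for a single wind-turbine telemetry event."""
    rpm = float(event["rpm"])
    vibration = float(event["vibration_g"])

    if vibration > MAX_VIBRATION_G:
        # Severe anomaly: signal the turbine to halt, open a work order
        # so a technician gets dispatched by boat.
        return {"action": "halt", "open_work_order": True}
    if rpm > MAX_ROTOR_RPM:
        # Over-speed: command a slow-down, no technician visit yet.
        return {"action": "slow_down", "open_work_order": False}
    return {"action": "none", "open_work_order": False}

assert handler({"rpm": 25.0, "vibration_g": 0.2})["action"] == "slow_down"
assert handler({"rpm": 12.0, "vibration_g": 1.1}) == {"action": "halt", "open_work_order": True}
```

In the full pattern, the returned action would become an EventBridge event or an IoT Core command published back down over the Leo link.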
But again, this wind farm is out in the middle of the ocean. No fiber out there, no internet connectivity. Well, this is where Leo comes in. If we utilize the antenna together with our remote wind farm and we utilize the Direct to AWS connectivity, we can now bridge this wind farm in the middle of the ocean back to the exact same patterns that we would use to build digital twins and monitor our wind farms in the cloud.
There's a link at the top, and there are links in the next couple of slides as well for references to some of these architectures.
Let's look at another example: egress traffic inspection. I think we all know that for ingress traffic, traffic coming into your network, everyone puts up a firewall, right? We want to make sure we block any nefarious traffic coming into us, so we have a firewall. But what's becoming even more common is that teams want to have egress verification, meaning I don't want data leaving my system because my data is proprietary to me. I don't want to be leaking data to the outside world, so I may have egress traffic inspection. This is a very common pattern, something we do in the cloud, maybe using something like AWS Network Firewall.
But what if I'm operating remotely somewhere? I have a remote site, maybe a mining site. Maybe I'm mining gold, and I don't want my competitors to know how much gold I've mined. But my mining process requires a lot of compute, and I need to do that on premises. So what do I do? I use AWS Outposts. With AWS Outposts, I can have EC2 compute running in a private subnet on premises, which is great. When I need to launch a new container or a new piece of software on it, I may actually need internet access. Maybe I need to pip install or yum update something, right? So I need network access outbound.
But again, I have requirements from my security team that say no egress traffic unless it's inspected. How am I going to inspect it? Well, by connecting my Outpost back to my network in the cloud, back to my VPC, through Amazon Leo, I can force all of my traffic coming from EC2 running on my Outpost on premises to go through my firewall endpoints that live in the region in my VPC, out through my network firewall, to check whether it's allowed to get out. Only if it is will it then egress to the internet. So I've got security. This will make sure that none of my gold mining information gets leaked outside of my network. Only my pip installs and yum updates do, but my gold mining data does not.
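The allow-list decision that AWS Network Firewall makes for this pattern can be shown in miniature. This is a toy version of a domain allow-list, assuming a handful of package-mirror hostnames; the real enforcement happens in Network Firewall rule groups, not in application code.

```python
# Toy egress policy: permit package mirrors, block everything else.
# The domain set is illustrative; a real deployment would define this
# as an AWS Network Firewall domain-list rule group.
ALLOWED_EGRESS_DOMAINS = {
    "pypi.org",
    "files.pythonhosted.org",
    "mirrorlist.centos.org",
}

def egress_allowed(hostname: str) -> bool:
    """Return True if the destination hostname matches the allow-list,
    including subdomains of allowed domains."""
    hostname = hostname.lower().rstrip(".")
    return hostname in ALLOWED_EGRESS_DOMAINS or any(
        hostname.endswith("." + domain) for domain in ALLOWED_EGRESS_DOMAINS
    )

assert egress_allowed("pypi.org")
assert not egress_allowed("exfil.example.com")  # gold-mining data stays home
```

Note the subdomain check uses a leading dot, so `notpypi.org` does not sneak past as a suffix match of `pypi.org`.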
One more example: Amazon has a video division, right? We make movies. Sometimes we make those movies in very remote locations. When we make those movies, we come out on set with a bunch of equipment, a bunch of very expensive equipment, and we need to monitor that site to ensure it's secure. So we need to put up security cameras in that remote location. But it's remote, and remote means hard to get network access, hard to get fiber, yet we need to do this. I've got administrators and system ops that need to watch those cameras to make sure that nothing gets stolen, nothing gets interfered with.
Well, a common way we would do this in the cloud is we would use something like AWS IoT Core and Amazon Kinesis Video Streams to stream those videos from the cameras. We would create some review process using something like AWS Lambda and Amazon OpenSearch Service. Maybe we even create some dashboard that can be accessed by our administrators so that they can see the cameras, right? Again, this is a common pattern to deal with cameras in the cloud. The problem is our cameras are far remote. There is no connectivity.
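The review step in that pipeline, turning a camera event into something searchable, can be sketched as a small transform. The event fields, the index name, and the document shape are assumptions made for this example, not a documented Kinesis Video Streams or OpenSearch schema.

```python
import datetime

# Hypothetical index name for the on-set security dashboard.
INDEX_NAME = "set-security-events"

def to_opensearch_doc(event: dict) -> dict:
    """Turn a camera motion event into a document suitable for indexing
    in Amazon OpenSearch Service. Field names here are illustrative."""
    return {
        "_index": INDEX_NAME,
        "camera_id": event["camera_id"],
        "stream_arn": event["stream_arn"],  # the Kinesis Video stream ARN
        "detected_at": datetime.datetime.fromtimestamp(
            event["timestamp"], tz=datetime.timezone.utc
        ).isoformat(),
        "label": event.get("label", "motion"),
    }

doc = to_opensearch_doc({
    "camera_id": "cam-07",
    "stream_arn": "arn:aws:kinesisvideo:us-east-1:123456789012:stream/cam-07/1",
    "timestamp": 1700000000,
})
assert doc["camera_id"] == "cam-07" and doc["label"] == "motion"
```

A Lambda in the real pipeline would do this transform and bulk-index the documents, so administrators can filter the dashboard by camera and time window.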
Well, here once again, Amazon Leo can help us out. Using an Amazon Leo antenna with a Direct to AWS connection, we go through a VPC, we go through PrivateLink, and we're now able to connect this entire infrastructure running in the cloud with cameras in a remote location on site, watching our trucks, our film cameras, and our crew to make sure everyone is safe in this remote location while they're filming the next episode of your favorite show. And again, these are your AWS accounts, these are your VPCs. You are just getting remote access via Amazon Leo.
Conclusion: Extending Your AWS Network to the Edge of the Globe
All right, quick recap, so we're coming up on time.
We talked about how Amazon Leo utilized the same AWS resources to build out their Leo service that you have access to. No special treatment just because we're all Amazonians. Or maybe a little special treatment, but none when it comes to AWS resources. They have access to the same resources that you do. So for the gentleman over there who is building a low Earth orbit system, you've got all the tools in your hands that you need.
The next thing we talked about, with some examples, is how in your AWS accounts, in your VPCs, you can utilize Leo to extend your networks to the edge, basically anywhere around the globe. We had a couple of examples of how we think people can use Leo, but we're really curious to know how you're going to use it. When you take Leo into your AWS account, what are you going to do with it? Maybe you'll tell us. Hopefully you'll get the chance to, because I'm really curious how Leo is going to help you out.
If you'd like to learn more about Amazon Leo as a service, that link will take you to a page with more information. It will give you the opportunity to sign up for the beta or to track general availability of the service. And with that, I'd like to thank everyone for their time today. I promised we were more interesting than Werner; I can't promise you that, but I hope we were. Thank you all very much. Enjoy the rest of your re:Invent.
; This article is entirely auto-generated using Amazon Bedrock.