🦄 Making great presentations more accessible.
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of the original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.
Overview
📖 AWS re:Invent 2025 - Formula 1 Extends Fandom with AI, Data, and AWS Media Services (IND3326)
In this video, AWS and Formula One demonstrate how they deliver F1 TV Premium, a multi-screen streaming platform processing over 5 million data points per race weekend. The presentation covers two major innovations: a root cause analysis system using Amazon Bedrock's multi-agent orchestration that reduced ticket resolution time by 95% (from 45 minutes to 4 minutes), and the technical architecture behind F1 TV Premium's synchronized multi-view experience. Using AWS Elemental MediaLive with epoch locking, HEVC tiled encoding, and timecode alignment across 24 video feeds, they achieved frame-accurate synchronization enabling fans to watch multiple camera angles simultaneously in UHD HDR at 50fps, launched successfully at the 2025 season opener in Australia.
This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.
Main Part
Introduction: AWS and Formula One's Partnership Journey
Hi everyone. Have we got any F1 fans in the audience today? Lando fans? Max fans? Oh, look at that, Hamilton fans. I won't go through the whole list. Hi everyone, my name's Nick Morgan. I'm a solutions architect here at AWS. On stage today with me, I'm going to be joined by Jamie Mullan, a specialist solutions architect for Edge Services. And I'm delighted also to welcome Dave King, head of digital technology at Formula One.
Before we get into the meat of the presentation, I just want to introduce Formula One and mention that they're celebrating 75 years of this fast-paced sport. It's a truly global sport, with the season running to 24 races in 21 countries, and it has become what we think is one of the greatest spectacles on Earth.
With a cumulative audience of 1.6 billion people and over 825 million fans worldwide, the sheer logistics and infrastructure are truly mind-blowing. If you've ever been to an F1 race and seen what goes on, it's amazing. But today we're going to tell you about a couple of examples of how AWS and Formula One deliver not only world-class operations, but also the racing action directly to fans through the innovative world of multi-screen TV.
AWS started working with Formula One back in 2018 and became a global sponsor for the 2019 season. The partnership was renewed in November 2022, and we currently operate at the top tier of partnerships within Formula One. AWS works with many other sports leagues, such as the NFL, PGA, and Bundesliga, but the unique thing we saw about F1 is really the data, which has been at the heart of the sport since the 1950s.
With Formula One, we focus on deep collaboration in three main pillars: transformation of data into racing intelligence, fan experience enhancement, and technical transformation through the joint technology funds that we run with F1. Just to pull out a couple of examples along the journey together, back in late 2019, F1 and AWS completed the CFD project to design the 2021 car, reintroducing the ground effect era to Formula One.
In 2021, F1 Insights powered by AWS was born. The Insights graphics are displayed during the race and provide key data points, giving fans access not only to entertaining events during the race but also to key metrics and predictive analytics. We expanded the partnership in 2022 and started work on the Fan 360 program. In terms of generative AI, I'm sure you know that we produced the trophies for the Canadian Grand Prix in 2024.
More recently, early 2025 saw the launch of F1 TV Premium, a multi-screen over-the-top platform. Also this year, we collaborated on the launch of root cause analysis. Later we'll tell you a little bit more about these projects.
The Data Infrastructure Behind Formula One Racing
So as you can imagine, F1 is a really data-heavy sport, and Formula One represents what we think is one of the most data-intensive sports in the world. Each F1 car generates over a million data points per second. During a typical race weekend, we process more than 5 million data points across all cars. This is roughly equivalent to streaming 1,500 high-definition movies simultaneously. This mass of data delivers insights and innovation, and we fundamentally feel it has transformed the way teams compete and how fans experience the sport.
So how is the data captured? To illustrate this, we're going to use the Imola circuit in Italy. If you look at the topology of the track, about every 200 meters or so there are induction loops around the circuit, and these make up the segments that ultimately make up the sectors that the timing is taken from.
In addition to this, there are radio towers that receive the data from the car. That data includes live telemetry data and also the onboard video. If we move over towards what's known as the Event Technical Center, this is connected to the circuit via high-speed optical links. Think of the ETC as a mini data center, but it also houses a video production gallery, and this building is portable and follows the racing season around the world. If we then take the next step, we go towards the Media and Technology Centre, which is based in Biggin Hill in Kent. Again, it's linked through high-speed fiber optic links. But in this building, there's a full broadcast suite, including a video production gallery, there are three studios, and there's also space to accommodate commentators and events. The MNTC, as it's known, also houses other departments that take care of things like IT, both on-premises and cloud, racing systems, track systems, and engineering.
So how does that connect to AWS? The MNTC connects to AWS via Direct Connect, and these links are used to send a subset of the telemetry data and the finished video feeds. These feeds include the world international feed, all onboard cameras, and some pre-canned graphics feeds. This is where we ingest the video content for F1 TV. The telemetry feeds also include sanctioned data points that we use for processing predictions. So now let's take a look at what goes on in AWS.
F1 Insights: Machine Learning Models and Real-Time Processing Pipeline
We use a mixture of machine learning and mathematical models for F1 Insights. Amazon SageMaker is really the inference engine that sits behind this. Each model has been designed and trained to meet the needs of the F1 broadcast. The ML models are also retrained between race weekends to improve accuracy and keep pace with the drivers and the way the season is developing. All models are stored in S3, ready for when the data arrives, but that's not the only use of S3 for Formula 1. As I mentioned before, the customer 360 data lake sits within it. This is a single source of truth for fan profiles and interaction data generated across the digital estate: F1.com, F1 TV, and other channels. Amazon S3 is also used for the video archive, and there are over seven petabytes of historical data within that archive.
If we take a look at the inference pipeline, this is where the data meets the model and produces new insights. In reality, the data is sent via Amazon API Gateway. Under the covers, the code is executed in AWS Lambda functions and the results are stored in DynamoDB. The code that interacts with the ML models ultimately returns the results via API Gateway to the Media and Technology Centre. Amazon DynamoDB is a NoSQL key-value store, as I'm sure you're aware, but here it holds the results of the analysis and all of the model training. This allows F1 to further enhance the models' accuracy and play back the data to support all the simulation activities.
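To make the shape of that pipeline a little more concrete, here is a minimal sketch of a Lambda handler sitting behind API Gateway that invokes a SageMaker endpoint and persists the result in DynamoDB. The endpoint name, table name, and payload fields are hypothetical placeholders for illustration, not F1's actual implementation.

```python
import json
import boto3

# Hypothetical resource names, for illustration only
ENDPOINT_NAME = "f1-insight-model"   # SageMaker endpoint hosting a trained model
RESULTS_TABLE = "insight-results"    # DynamoDB table storing inference output

sagemaker = boto3.client("sagemaker-runtime")
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(RESULTS_TABLE)

def handler(event, context):
    """Invoked by API Gateway with a batch of telemetry samples."""
    telemetry = json.loads(event["body"])

    # Run inference against the model hosted on the SageMaker endpoint
    response = sagemaker.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=json.dumps(telemetry),
    )
    prediction = json.loads(response["Body"].read())

    # Persist the result so it can be replayed for simulation and retraining
    table.put_item(Item={
        "session_id": telemetry["session_id"],
        "timestamp": telemetry["timestamp"],
        "prediction": json.dumps(prediction),
    })

    # Return the insight to the caller via API Gateway
    return {"statusCode": 200, "body": json.dumps(prediction)}
```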
Amazon CloudWatch is used to store log files for operational needs, and interestingly, the operational data that CloudWatch captures around these insights also helps F1 keep track of how these discrete systems are performing across the organization. Having processed the data, the results are now available to the video production teams. This insight data is sent out to the various departments within the MNTC. The graphics department, for example, prepares the graphics and queues them up ready for the production of the world international feed, and then the director can select which insights help tell the story on the day.
Bear in mind the end-to-end processing happens in one to two seconds: from the car, to the track, into the ETC and on to the MNTC, into AWS for processing, and then back to the MNTC where the graphics are produced ready for the broadcast. It's really important for this data to be accurate and to help tell the story of the race on the day. So this leaves us with a business challenge, as it always does. This is a high-stakes environment where everything must work in unison to deliver the racing action to the fans, and given the interdependency and complexity of these systems, being able to fault-find at pace is critical for F1's operations. So let's introduce root cause analysis.
Root Cause Analysis: Multi-Agent AI System for Operational Excellence
RCA allows F1 to quickly fault find and gain critical insights from CloudWatch logs from the applications that underpin each race. The project was initiated through the leadership of Ryan Kirk, head of cloud and DevOps at Formula One. The goal here is to harness the power of generative AI to deliver a step change in intelligent root cause analysis.
Let's take a look at how RCA works. Utilizing multi-agent orchestration with supervisor and specialist agents, F1 was able to onboard over 500 applications into this chatbot-based system. Keeping a knowledge base of each application and centralizing all logs into Amazon CloudWatch, we implemented a solution that can perform automated analysis of specific applications to discover faults and report them back. With results evaluated and summarized, support tickets created, and the LLM's chain of thought stored for later analysis, this system has vastly improved the discovery and resolution of issues.
Built around Amazon Bedrock and other AWS services that collate and process the data, the vast majority of applications run in Kubernetes, and these applications can auto-enroll with the RCA tool, identifying their core components. Bedrock takes over the agentic orchestration, calling supervisor and specialist agents to process tens of thousands of rows of log data to find issues and anomalies.
Let's take a closer look at a query. From the chatbot, a human support agent can ask it questions. In this case, we are asking about a database that supports the F1 API, or the Formula One management API. The first thing that happens is the application configuration is pulled from a lookup table. Metadata about the application's infrastructure and CloudWatch log locations are pulled. This information forms part of the initial LLM query and a suitable prompt is sent to a specialist support agent.
Each specialist agent is configured to support certain types of queries, such as interacting with CloudWatch, connecting to and testing database connections, and trawling through things like network and firewall logs for analysis. Results from each specialist agent are sent back to the supervisor for analysis and processing to work out, essentially, the big picture of the problem. In this example, Amazon CloudWatch database connection and network logs have been queried and the following issues have been identified: a configuration error in Kinesis, a broken database connection, and an overloaded network device.
The supervisor agent then makes a determination based on the tuned prompts and creates a summary of its findings. At this point, given that there are errors, a support ticket is generated detailing what's been discovered. The supervisor then returns its findings to the orchestration layer, again summarizing the issue and providing extra information about the ticket and root cause. Finally, the UI receives the findings and relays them back to the human support agent.
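As a rough illustration of that flow, the sketch below shows how a chatbot backend might pull an application's metadata from a lookup table and hand the enriched query to a Bedrock supervisor agent. The table name, agent IDs, and prompt format are assumptions made for this example, not F1's actual configuration.

```python
import uuid
import boto3

dynamodb = boto3.resource("dynamodb")
bedrock_agents = boto3.client("bedrock-agent-runtime")

# Hypothetical identifiers, for illustration only
APP_CONFIG_TABLE = "rca-application-config"  # metadata: components, CloudWatch log locations
SUPERVISOR_AGENT_ID = "AGENT_ID"             # Bedrock supervisor agent
SUPERVISOR_ALIAS_ID = "ALIAS_ID"

def ask_rca(application: str, question: str) -> str:
    """Pull application metadata, then hand the enriched query to the supervisor agent."""
    config = dynamodb.Table(APP_CONFIG_TABLE).get_item(
        Key={"application": application}
    )["Item"]

    # The metadata (components, log groups) becomes part of the prompt, so the
    # supervisor knows which specialist agents and which logs to consult.
    prompt = (
        f"Application: {application}\n"
        f"Components: {config['components']}\n"
        f"Log groups: {config['log_groups']}\n"
        f"Question: {question}"
    )

    response = bedrock_agents.invoke_agent(
        agentId=SUPERVISOR_AGENT_ID,
        agentAliasId=SUPERVISOR_ALIAS_ID,
        sessionId=str(uuid.uuid4()),
        inputText=prompt,
    )

    # invoke_agent streams the answer back as chunks
    answer = ""
    for event in response["completion"]:
        chunk = event.get("chunk")
        if chunk:
            answer += chunk["bytes"].decode("utf-8")
    return answer
```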
The chat agent keeps the history of the chat for future reference and analysis of the chain of thought. So in summary, using a multi-agent approach, F1 has been able to realize a step change in RCA productivity with a 95% reduction in full ticket resolution time and the initial triage down from 45 minutes to just 4 minutes. AI can programmatically trawl through log files at a machine-level pace, finding issues quickly and efficiently.
Automated ticketing puts issues into the system quickly, alerting support teams across the organization. So harnessing the power of this orchestration, domain-level specialist agents, and direct access to CloudWatch centralized logs, and being able to describe your applications in terms of metadata objects along with carefully crafted prompts, F1 has realized a step change in analysis for mission-critical applications.
Introducing F1 TV Premium: Evolution from Single-View to Multi-Screen Experience
If you'd like to learn more, take a look at our prototyping blog; we've also recently had an article published in Wired. There are some QR codes at the end of the presentation. I'd just like to thank you for your time, and I'm going to pass the baton over to Dave King, head of digital technology at Formula One.
This season, you're in the race, fully immersed in every Grand Prix live. Watch what matters most. Create your custom race view from multiple feeds. Look what he's done with the opportunity. Miss nothing with pristine 4K. See what every team sees in every moment. This is an inexplicable crime. And analyze what really happened, why and when. Because these are cars that have 1000 horsepower. Stream all this and more on up to 6 devices. A new way to see a new era. He's done it. He's won it. He's done it.
Hello everyone and thank you Nick for the intro. I asked for a little hype video. I find it's easier to start that way. I'm David King, head of digital technology at Formula 1. My team and I have the privilege of designing, building and delivering change across our consumer-facing digital properties. Some of them you can see on the screen in front of you. You've seen a little trailer for F1 TV Premium. That's what we're going to talk about in more depth, but we're going to get there in a minute.
My team has responsibility for our F1 web and app, completely relaunched in 2025, bringing vertical video, a completely restructured site, live blogs during race day, short-form video, and written and editorial content. It's the home of everything F1 for us. F1 Live Timing, which is what you see here on the mobile device, is where all the data that Nick talked about collecting from the cars—the telemetry, the positioning, the sector times, the mini sector times, the deltas between the drivers—is accessible as a second-screen experience, whether you're consuming the broadcast on F1 TV, through Sky Sports UK, through ESPN, through ViaPlay, or whoever it might be. That's your second-screen experience. You can see what's happening even if the broadcast isn't actively covering it.
F1 Fantasy has two games here. One traditional fantasy game where you select your drivers, select your team, select your boost for a given race, and predict who's going to finish in the top three. You compete against your team, your colleagues, your friends in leagues. Then there's the F1 Predict game, which is a little more ad hoc. It launches typically on a Wednesday before an event with 10 questions for each event. Who's going to be on the podium, who's going to gain the most positions in the race, all of that kind of good stuff. It's quite good fun. I'm not very good at the first one, not bad at the second one, and it's easier to dip in and dip out.
Then there's F1 TV, the thing we're going to talk about in more detail. F1 TV is a product that we launched back in 2018. It's our own in-house OTT platform, available in 132 countries. In some, it's there on its own; in others, it sits alongside our traditional broadcasters. It works across all mobile devices, tablets, big-screen devices, and on web. That platform has been there since 2018. We've always had access to all 20 cars on the grid, so all the onboard cameras, two international feeds, and two data channels that support the viewing experience.
Within that platform, there are two subscription tiers: a replay tier with full event replays and VOD content, and F1 TV Pro, which has always been our live experience. The problem with that live experience was that you had all of those feeds available, but we only let you switch between them, so you could watch one of them at a time. You could switch to watch Max take that P2 off the grid in last weekend's race, but you couldn't see what else was going on. What we set out to do was build F1 TV Premium.
Building Multi-View: Synchronization Challenges and UHD HDR Implementation
So what is F1 TV Premium? Does anyone use F1 TV, first off? Are you a premium subscriber? A few. Enjoy it. Useful? Don't worry about Disney. F1 TV Premium is something we'd been working on for about 18 months to two years prior to its launch at the start of this year. We wanted to bring a really compelling multi-angle experience to our sport. We're in a pretty unique position to have access to all of the live feeds all of the time for every event. It's different to multi-view for a sport like football or American football, where you can watch different games and it doesn't matter that things are slightly out of sync.
One of the key challenges we were trying to address here was creating that multi-view experience while ensuring that everything was in sync, and sync is a thing you will hear me talk about again, and Jamie too when he comes on to talk about the media services element of this.
There were a few more things that we added just to make this a little more complicated. Historically, F1 TV has carried HD feeds in SDR at 50 frames a second. The last jump we made was 3 or 4 years ago when we moved from 25 to 50 frames a second. But we set ourselves quite an aggressive challenge to go from HD SDR to full UHD HDR. Whilst that had been around in F1 and through traditional broadcast before, anyone who works with media services, distribution, and OTT knows that the step from HD to UHD is quite a large one.
We also did the same for the onboard cameras, moving them up to HD HDR to support multi-view, and we wanted to make sure we got that spread of devices as wide as we could with that same compelling experience. And then there's the less interesting bit: allowing playback on multiple devices, so you can create a multi-view on 5 or 6 devices at the same time. I'm going to play this video and talk you through what's happening; this is on a tablet device, showing multi-view in action. For those of you who haven't seen it, it puts what we're going to talk through into a little more context.
This is coming up to race start. Drivers have just finished the formation lap. What you can see here is the international feed in the main box. Down the right hand side, a full 22, 23 tiles of video that are available to the fan to select. We've brought one of them on here, we've brought Lando's onboard camera on. I think he's in P2, P3. The international feed is still continuing. Some drivers disappearing down the pit lane. Someone must have changed their power unit.
What we were trying to get to here, and this was the nervous bit, is that when those lights go out, everything moves in sync. Because if it doesn't, it's a bit rubbish. It gives away the story. Your eyes get caught on one thing, you're misled by something else. So we've pulled up here 3 different feeds: the international feed, the onboard camera, and the driver tracker, all of which should move at roughly the same time. So 5 lights, and out we go, and the international feed, the onboard camera, and the driver tracker all move at the same time.
This isn't just a gimmick, this isn't just a thing that we built for this presentation. You can go back to any full event replay on F1 TV from this past season, go to that lights-out scene, pull up your view, and see that we did what we said we were going to do. Once we've got all of this video, we also have all of the commentary, all of the onboard camera audio, and all of the subtitles, and it doesn't matter which feed you're watching: you choose what you listen to. You might be interested in listening to Max's comms with his engineer. You can switch that on; you don't have to be watching Max's onboard camera.
What we did constrain was the layout. It's probably the one kind of compromise we made. We could have been fully flexible, but we honed this down to give what we think is the most flexible and still compelling experience with the options that are available rather than a complete free for all of moving video all over the place and resizing. This is what we think works well. The swap of video is pretty slick as well.
Video Workflow Architecture: From Circuit Capture to Cloud Distribution
We're going to start talking about that video workflow because there are 3 significant pieces that needed to come together in tandem to make this work really well. The first bit Nick talked about: capture at the circuit. Nick mentioned that all the video for the onboard cameras, from cars traveling at 200 miles an hour down the straight, is transferred off the cars over radio frequency, around a fiber network at the circuit, and back to our Event Technical Centre. That's one element of what we carry on F1 TV.
We then have 25 to 30 track cameras around the circuit, also connected to that fiber network, also back to the ETC at the circuit. We then ship all that back to our MNTC, our Media and Technology Centre in Biggin Hill. That's where the final cut happens: the production element of F1 TV, but also the broadcast.
We then push it into the cloud, and magic happens. However, it's not quite as simple as that because there are three or four different classes of video, and they all follow slightly different paths. The onboard cameras are relatively raw and untouched when they arrive with us to be processed and made ready to be distributed to the cloud, so they follow one path. You then have all of those track cameras and all of the radio-frequency cameras from camera operators up and down the pit lane, capturing all those great shots, whether off the pit wall or of the driver doing a tire change. Hopefully not the driver doing the tire change, as that would be problematic. They all arrive with us at slightly different points in time, and if we go back to one of the things we were trying to achieve here, it was making a really great experience. Everything has to be in sync.
Our first job before we even get to AWS is the alignment of feeds within our Media and Technology Centre and the insertion of a timecode: simply a time reference that says this feed, this feed, and this feed all share the same timecode. So before we pass things into AWS, we get those timecodes in order. Before I hand over to Jamie, there are some architectural principles that apply more to the media services element of this solution that we wanted to adhere to. This wasn't our first rodeo. Our target, while bringing in UHD HDR, was to have a single MediaLive ABR encoding ladder. We didn't want multiple outputs here. We didn't want multiple instances of MediaLive trying to do the same thing.
We also wanted to continue to support our existing F1 TV Pro experience, that single-view experience, because that's a large part of our fan base already, and Premium and multi-view isn't for everyone; there's a slightly different price point. Device compatibility was also a consideration, so there were reasons we wanted to protect that experience, but also protect ourselves operationally to make sure that we could provide the best experience for our fans. In terms of multi-view, we wanted it to be as flexible as possible on the client side, so the power was in the fan's hands. I said that we constrained it, but I don't think we constrained it that much.
We didn't want to be tied to the video stack on operating systems. There was always going to be a problem with iOS 26 or Android 14, and you'd be constantly fighting a battle. For the level of control that we need to create that in-player experience, it had to be a player built from the ground up. The same applies in terms of giving broad device support. We also talked about synchronization, and we'll talk about it again. We got everything in sync at the start of this as we passed it into MediaConnect and into the media services pipeline. We need to maintain that right through the media services pipeline and through distribution to the client device.
AWS Elemental Media Services: Achieving Frame-Accurate Channel Synchronization
With that, I'm going to hand over to Jamie Mullen, Senior Specialist Solutions Architect in the media space, who can talk you through it in more detail. So, hey everyone, my name's Jamie Mullen. I'm a specialist solutions architect here at AWS. I've got the privilege of deep diving into the AWS Elemental Media Services that help bring the streaming part of F1 TV Premium to life. We're going to focus on three challenges. We're thinking about how to make sure these feeds are aligned, as Dave described. We're then going to think about how these streams are encoded and how they are packaged. And then finally, we're going to dive into how to encode to deliver a multi-view experience, and we'll look at some of the approaches we see in the industry to enable multi-view. But let's focus on the big one first: channel and content synchronization.
If you're not familiar with AWS Elemental Media Services, this is a typical workflow. We have AWS Elemental MediaConnect, which is our secure and reliable transport service for live video. That feeds into AWS Elemental MediaLive, which is our cloud-based, broadcast-grade video processing service. We then have AWS Elemental MediaPackage, which is our just-in-time packaging and origination service, where you can implement DVR-like experiences and also things like DRM. And finally, we've got Amazon CloudFront, which is our content delivery network for distribution. Based on the viewer or client, you can pull whichever manifest and stream you want.
However, customers like Formula 1 operate with more resiliency than just one region, especially for those big events, and these multi-region architectures have their own synchronization challenges during failure. I'm going to show you what this looks like for a failure before relating it back to F1. We have the same architecture as before, but split across two regions, Region 1 and Region 2. The only addition here is AWS Elemental Live, which is our on-premises encoding appliance and, in our case, is doing the contribution encoding.
What happens if something goes wrong in Region 1? The viewers and clients are currently streaming from Region 1, but what happens in this case? Well, clients might actually start seeing black frames. Hopefully, in the media and distribution world, you have an eyes-on-glass operations team looking for issues, and they eventually spot this issue and want to trigger a failover to Region 2. However, if this is a real team, there might be some delay in spotting this issue, and also there might be some time delay in actually carrying out that failover.
Once that failover is complete, the clients can then get a new URL for the second region via a URL vending service of some sort. Then they can start streaming again. However, when clients rejoin that stream on Region 2, there might be some challenges around whether the two feeds are aligned between Region 1 and Region 2. Are they going to have to go back into the stream and scrub backwards to find where they were before? The key challenge here is how do you keep things in sync and how do you do this automatically.
Something called timecode, which Dave mentioned, is part of that answer. If you don't know, timecode is a time reference embedded in each video frame. Timecode can be super important in the broadcast and distribution world. How we're going to look at it today is we want to use it for synchronization, so we want our downstream components to use this timecode to help us solve this problem. Timecode is part of the answer.
How does it apply to a failover scenario? Well, we have a couple of features that we're going to run through quickly just to show you what that might look like. One of them is called seamless cross-region failover, and what we mean by seamless is frame-accurate alignment across all your origins, in our case Region 1 and Region 2. The manual failover example from before, for instance, was potentially neither seamless nor automatic.
We have the same architecture, but this time we're going to start with the timecode on the on-premises Elemental Live appliance. That common timecode is applied to the contribution source, so both regions receive an aligned contribution feed. We then have MediaLive, which is configured with epoch locking: basically, it uses the embedded timecode from that contribution feed, and the timing source is set to the input clock. MediaLive then outputs via CMAF ingest into MediaPackage. When you combine CMAF ingest and epoch locking, what you actually get is a regular segmentation cadence based on epoch time. MediaPackage can also use this for stateless synchronization to predictably package the output content.
What happens if something does fundamentally go wrong with the video, whether it's slate inputs, incomplete manifests, or even a missing DRM key? Well, MediaPackage has the ability to 404 its endpoint. Finally, the missing piece of this puzzle is CloudFront. In CloudFront you can use something called origin groups, where you specify a primary and a secondary; in our case, Region 1 is the primary and Region 2 is the secondary. The other important part is that you can set something called failover criteria, based on HTTP status codes. When MediaPackage does 404 its endpoint, the request automatically gets sent to the second region via the secondary origin.
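As a rough sketch of that piece, here is the origin-group fragment of a CloudFront distribution configuration with failover on HTTP error codes. The origin IDs are illustrative placeholders, and the rest of the distribution config is omitted.

```python
# Fragment of a CloudFront DistributionConfig showing an origin group that fails
# over from the Region 1 MediaPackage origin to Region 2 on HTTP error codes.
origin_groups = {
    "Quantity": 1,
    "Items": [
        {
            "Id": "mediapackage-origin-group",
            "FailoverCriteria": {
                # When the primary origin returns one of these codes (e.g. MediaPackage
                # 404s its endpoint), CloudFront retries the request against the secondary.
                "StatusCodes": {"Quantity": 3, "Items": [404, 502, 503]}
            },
            "Members": {
                "Quantity": 2,
                "Items": [
                    {"OriginId": "mediapackage-region-1"},  # primary
                    {"OriginId": "mediapackage-region-2"},  # secondary
                ],
            },
        }
    ],
}
```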
But what if it isn't such a fundamental issue with the video? What if it's an issue where you have intermittent black frames or frozen frames going to just one region? How do you handle that? Well, building on the previous architecture, we have something called media quality-aware resiliency to help solve this. On AWS Elemental MediaLive, we generate something called an MQCS score for each segment, where MQCS stands for Media Quality Confidence Score.
This score is based on multiple input parameters such as input source loss, black frame detection, or frozen frame detection. The score ranges between 0 and 100, with 100 indicating the best quality. MediaPackage then provides an in-region, quality-based failover capability, so it can select the segment that has the higher score. Beyond that, it can actually signal the score on egress, via CMSD.
Another important piece of this architecture is that previously we were using origin groups with default settings. In the latest architecture, you can use quality-based origin selection instead of that default. The way it works is that a GET request is sent ahead to each region, and CloudFront selects the segment that will provide the better user experience to the viewer. We've looked at an example flow, but how does that actually relate back to F1 and F1 TV Premium, especially around epoch locking and that predictable packaging element? F1 wants to achieve the same stream alignment, but across multiple channels, not just one channel in multiple regions.
If you have not seen the AWS Elemental MediaLive console settings before, this is how you can configure epoch locking. As Dave said, they provide 24 video feeds for a live session for viewers, and a common timecode is applied across all those feeds. MediaLive is then configured with the output timing source set to the input clock, the output locking mode set to epoch locking, and finally the timecode source configured as embedded.
Another really important aspect of this is encode configuration. F1 also applies consistent encode parameters for things like frames per second and GOP size, and disables things like scene change detection to keep all those channels encoded and packaged consistently. The end result is that not only can you get timecode alignment in a single channel, you can actually achieve it across multiple channels. You can see F1 Live in the bottom left-hand corner with an onboard camera of Lando Norris in P2, and then you have Max, who is P1, and you can see the timecode is aligned across them.
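As a rough equivalent of those console settings in API form, here is a fragment of a MediaLive encoder settings payload (as used with create_channel or update_channel) showing the timecode source, epoch locking, and consistent encode parameters described above. The resolution, bitrate ladder details, and GOP values shown are illustrative assumptions, not F1's actual configuration.

```python
# Fragment of a MediaLive EncoderSettings payload (boto3 create_channel / update_channel).
encoder_settings = {
    "TimecodeConfig": {
        "Source": "EMBEDDED"  # use the timecode embedded in the contribution feed
    },
    "GlobalConfiguration": {
        "OutputTimingSource": "INPUT_CLOCK",   # output timing follows the input clock
        "OutputLockingMode": "EPOCH_LOCKING",  # segment boundaries derived from epoch time
    },
    "VideoDescriptions": [
        {
            "Name": "uhd-hdr-50p",
            "Width": 3840,
            "Height": 2160,
            "CodecSettings": {
                "H265Settings": {
                    # Identical frame rate and GOP size on every channel, with scene
                    # change detection disabled, so all 24 feeds segment on the same
                    # cadence and can be packaged in lockstep by MediaPackage.
                    "FramerateNumerator": 50,
                    "FramerateDenominator": 1,
                    "GopSize": 2,
                    "GopSizeUnits": "SECONDS",
                    "SceneChangeDetect": "DISABLED",
                }
            },
        }
    ],
}
```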
In summary, with timecode, MediaLive, and MediaPackage, you can achieve server-side channel synchronization. That is the first tick in the box. Traditionally, when you deliver content to viewers, that might only be one show or program at a time; when you want to consume content like Formula One, you might want to consume more of those feeds at the same time. I am going to quickly cover some of the industry approaches to implementing multi-view and how each one changes your video encode workflow.
Multi-View Implementation Approaches: HEVC Tiled Encoding Solution
The first one is server-side multi-view. This is where you have a feed pre-composed upstream, and the client device can simply consume it like any other OTT feed. However, this presents a challenge because there is no real flexibility: if you want more flexibility on the client, you are going to have to produce more feeds. Take F1, for example: there are 24 video feeds you can consume in F1 TV Premium. With a 2x2 grid layout like you see on the screen now, that is over 10,000 new streams required to cover every combination. On the other hand, there is no downstream synchronization to handle because it is all done further upstream in your video workflow. The only thing you need to think about is producing those new streams, possibly adjusting them a little for your viewer experience, and checking how they look on a device.
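As a quick sanity check on that figure, if each 2x2 layout is treated as an unordered pick of 4 feeds out of the 24 available (ignoring which tile each feed lands in), simple combinatorics gives the number quoted above:

```python
import math

# Unordered choice of 4 feeds from 24 for a 2x2 grid
print(math.comb(24, 4))  # 10626 distinct pre-composed streams
```

If the position of each feed within the grid mattered as well, the count would be even larger, so either way pre-composing every combination server-side is impractical.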
The second one is multi-player multi-view. This is where you have multiple streams available to be consumed, and a user can consume one or decide to consume several at the same time. The device has to run multiple players to play back these channels, and this can be potentially resource heavy, as all the players share the device's underlying hardware. The main challenges with this approach are how all the players are aware of each other, how you keep the feeds in sync, and especially how you handle synchronized actions: if you want to pause, play, or scrub forwards and backwards, how do you achieve that across every player on the same device? And finally, how do you provide wide device coverage?
That is, do you have to implement this on many different operating systems to get the same experience across multiple devices? However, what changes in your encode profile? Well, everything we discussed around channel synchronization could be a way of helping with the timing problems. Technically, you can use your existing encode profiles; there are no major changes there. You just might have to move to epoch locking with MediaLive and MediaPackage. And then finally, the last one is single-player multi-view with HEVC tiled encoding.
Spoiler: this is actually how Formula One decided to implement multi-view. With tiled encoding settings configured in MediaLive, MediaLive can generate a series of independently encoded tiles, which the player and the decoder can use to decode any combination of tiles across any bitrate and resolution, displaying whatever makes sense for that device or whatever a user has configured. The player itself has the technology to take those tiles from the HEVC segments and merge them together before they go through a single decoder. In this case, we're pulling the middle resolution, or the middle rendition, and that's being pushed through that single decoder.
So how would this change your encode profile? Well, first off, you'd have to start delivering HEVC. Next, you'd have to configure your HEVC tiled encoding settings in MediaLive. But equally, you also want to enable epoch locking and all those timing considerations that we spoke about in MediaLive and MediaPackage. Tile configuration is based on multiple parameters: it could be based on a treeblock size, and the tile width and height might be based on the resolutions or renditions in your bitrate ladder. But for this example, we're going to keep it really simple and say our tile size is roughly the size of rendition one.
So we've got a 1x1 grid on rendition 1. We've then got that same tile size, so we've got 2x2 on rendition 2. And then on rendition 3, we'll continue to use that same tile size, but we've got 9 tiles there, so we've got 3x3. Now this is really important: you need to apply that common tile size across all your renditions so that it can actually be used by the player. So what does it look like in the MediaLive config? You've got the tile height and width, you've got tile padding, you've got some motion vector config to be disabled, and finally you've got that treeblock size setting.
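For orientation, here is a sketch of how those tile-related fields might sit inside the H265 settings of one rendition. The field names follow the settings called out above, but the specific values are illustrative assumptions and would need to match your own bitrate ladder and a tile size common to every rendition.

```python
# Illustrative H265 settings fragment for one rendition of a tiled-encode ladder.
h265_tiled_settings = {
    "FramerateNumerator": 50,
    "FramerateDenominator": 1,
    "GopSize": 2,
    "GopSizeUnits": "SECONDS",
    "SceneChangeDetect": "DISABLED",
    # Tile configuration: the same tile width/height is applied to every rendition
    # so the player can mix tiles from different bitrates through a single decoder.
    "TileWidth": 960,                  # placeholder tile dimensions
    "TileHeight": 540,
    "TilePadding": "PADDED",           # pad renditions that aren't an exact multiple of the tile size
    "TreeblockSize": "TREE_SIZE_32X32",
    # Keep motion vectors inside the picture so tiles remain independently decodable
    # and can be recombined by the player.
    "MvOverPictureBoundaries": "DISABLED",
}
```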
So in summary, this is how the streaming part of F1 TV Premium is delivered. We have an aligned timecode from source. We have epoch locking in MediaLive, with a consistent encode configuration for GOP size and frames per second used across all the streams. MediaLive uses HEVC tiled encoding and outputs to MediaPackage via CMAF ingest, allowing MediaPackage to do that predictable packaging. It's then available for consumption and distribution via Amazon CloudFront. On the player side, the player can pull those HEVC feeds and decode segments and tiles according to how a user has configured their multi-view setup for that F1 session. For this one, we're pulling the top bitrate for the car onboard, the middle rendition for the cockpit view, and the lowest rendition for the track view, and you can see the viewer-configured experience in the bottom right corner.
Launch Success and Future Vision: Reflections on F1 TV Premium Deployment
F1 TV Premium went live at the start of this season, right? It was Australia, and it was overnight. How did that launch go? I'm really keen to understand. You were there. We had been working on this for a year or more before Australia this year. The start of the season is always quite twitchy anyway, and doing a fairly significant product launch made it even more challenging. It was touch and go, but we had set ourselves up in Biggin Hill, where our media and technology centre is. We met with our AWS colleagues, our development partners' colleagues, and the player organization was there as well to really team together. Nothing quite like working Australia hours in the UK when you're launching a new set of features.
We were pretty confident. But when you've sold something in the weeks and months leading up to that, something you've proven in testing but not on thousands and thousands of devices around the world, it's always a little nerve-wracking. Come the end of practice one, I think we were all kind of pinching ourselves and going, "Cool, this is going pretty well." It's probably the smoothest product launch I've been involved in throughout my career.
The session today was really focused on that video synchronization. We indexed heavily on how you did it with AWS Elemental MediaLive and MediaPackage, but it wasn't just that part. There was a wider piece of work to do at both ends to give that end-to-end video synchronization. It's those three parts, with the media service sitting right in the middle, and everything you've articulated is exactly how that is managed.
There's a whole piece of work that we had to undertake before the start of this year within our broadcasting domain because that was critical. It was within our gift to get feeds aligned before they went into AWS, and that involved a fairly significant amount of effort from non-digital and broadcast engineering. For that first event, we had about a 250 millisecond round trip latency from our M&TC at Biggin Hill over our diverse fiber paths to the circuit. We were there in our high-speed track test, a non-televised session before the weekend begins, where we put safety and medical cars around the circuit and do a full system test.
We were there looking at the live video output, making sure that things were visually aligned. We tried to find markers on the circuit from different camera angles to get that certainty that what we configured in each of the delays across all of the 24 feeds was good. The other variable I didn't actually touch on was our international feeds. We delay within the building because we take international commentary contribution from broadcasters, so we have to let the commentators see the feed before we can take it back and embed it and align it in our feed.
As well as our own F1 TV production feed, which is produced in-house with both commentary at the circuit, presenter hits in the pit lane, and wraparound content, all of this stuff happens at different times. It's pretty touch and go, but we went into that first session pretty confident and came out of it really confident, talking about what we could do next and what the next experiment or change that we want to undertake would be. I guess we're already at the end of the season, right? It's flown by, and the way that the championships kind of rolled out, you've got that three-way fight between Max, Oscar, and Lando.
The video sync part is going to be really interesting to watch unfold. The video sync was a thing that we were concerned about with drift or change throughout the season. We haven't touched it since probably race two or race three, and that was like real fine tuning of literally one or two frames. In fact, I think the recording I showed was from race one. If you look very closely, Lando's cameras are out by about two frames versus the international feed. I'll take that—two-fiftieths of a second.
This is prime. This is why we built it to get that big screen experience from multiple angles at the same time. As I say, it's really easy to do this for a multi-event sport where time doesn't matter and it doesn't matter that Messi scored four seconds before someone else scored in a different match. It's like this is all happening at the same time, and when the lights go out on Sunday in Abu Dhabi, I'm going to look at the analytics and metrics and see how many people have got the international feed, Max, Lando, and Oscar as their multiview to see what happens going into turn one.
Are you going to have that set up or are you going to have a little bit further back just so that you can actually see what's happening? No, I'll fortunately be in the position during the weekend, probably being in the M&TC on Sunday for lights, so I'll have a production gallery in front of me as well. Not only did you do multiview, but obviously you've touched upon doing UHD HDR and also adding that HEVC tiled encoding. That's a big change.
Huge. I mean, I said right at the very start that our biggest kind of change in the video pipeline prior to this was probably that jump from 25 to 50 frames per second, which actually I think anyone who was an F1 TV subscriber back when we were at 25 frames per second was really thankful that we did that. That small change made such a big difference.
It was a huge jump, though. We're talking about trying to have the same encoding profile for UHD HDR with HEVC tiled encoding and with epoch locking in place. It's quite a demand to place on something like AWS Elemental MediaLive. Finally, it was a massive launch with lots of things going live. How did you prepare for that launch?
A lot of what we did was in the weeks and months leading up to it. You don't just turn up and expect it to be great. We leveraged what is now known as AWS Unified Operations for Media. That was pre-event, so typically a service that you deploy when you're having a big launch, a big change, or a big event involving media. The engineering and service teams helped us review what the architects had specified and deployed as best practice and worked through all the different configurations we had, making sure the right alerting was in place. They went through those configurations with a fine tooth comb, but then they were there, not physically with us but on Slack, through that first race weekend and pretty much every race weekend since then to a lesser degree, just continuing to provide that level of service and assurance that they still care and we're still operating as we expect to.
I've got to ask this one for all the fans in the audience for Formula 1. What is actually next for Formula One and F1 TV? The F1 TV one's easy. Two more onboard cameras next year. Beyond that, some other features are coming. It's a watch this space situation, but there are some things coming next year for Formula One that will be awesome. We've got a big championship decider, but it's the end of an era for the cars that we're seeing. So there's going to be a lot to do both in our second screen experience and in terms of our live timing setup, in terms of the broadcast, what those new regulations mean, how the teams adapt to them, and who's winning that initial charge.
There are some driver changes as well. There's going to be a huge amount going on and a lot of storytelling to do. It's our job as digital to make that available and do that storytelling for the fans. A very exciting 2026 season then. So just quickly wrapping up, if you want to learn more about media quality-aware resiliency, or want to do some further reading on the F1 TV Premium or root cause analysis work that was presented, here are the links to their respective blogs. Finally, if you are interested in leveling up your skills in AWS Cloud and AI, I thoroughly recommend you check out AWS Skill Builder, where there are over 1,000 free courses and resources available.
With that, from Nick, Dave, and myself, thank you very much for joining our session. We really hope you enjoyed it. Just a quick reminder: please fill out the survey in the re:Invent app. Thank you again for your time, and I hope you enjoy the rest of re:Invent.
This article is entirely auto-generated using Amazon Bedrock.