🦄 Making great presentations more accessible.
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.
Overview
📖 AWS re:Invent 2025 - Driving Fan Engagement with Data, Analytics, & AI (SPF308)
In this video, Jake Lee and Ashwini Rudra from AWS explore how sports, media, and entertainment organizations use data, analytics, and AI to enhance fan engagement. They present case studies including the NFL's fan platform processing 90 billion rows of data with Amazon Redshift, Bundesliga's 17% app engagement increase using Amazon Personalize, and PGA Tour's Tourcast generating AI commentary for 100% of player swings with Amazon Bedrock. The session covers building fan data platforms through layered architecture including ingestion, processing, and storage using services like AWS Glue, Amazon Kinesis, and identity resolution techniques. They discuss choosing between data lakehouse and warehouse approaches, and demonstrate creating AI-powered solutions using Amazon Bedrock Agent Core for multi-agent architectures. Key takeaways emphasize hyper-personalization, robust fan data profiles, and leveraging the AWS AI stack to build new fan experiences.
; This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.
Main Part
Introduction: Competing for Consumer Attention in the Digital Entertainment Landscape
Good afternoon everybody and thank you for coming to the sports forum. We're really excited to kick off your re:Invent experience with our session here and with all the exciting activations with our sports partnerships here in the sports forum. My name is Jake Lee. I'm a Sports Solutions Manager at AWS and I'm joined by my colleague Ashwini Rudra. Together the two of us are on the team at AWS that supports a number of our sports customers.
Now what likely brought you here to the sports forum and our session today is your interest in sports or in entertainment. For any of the sports fans in the audience today, I really hope that your fan experience has been much better than mine. Because mine has been just a roller coaster of emotions and heartbreaking losses. But even though I've gone through that roller coaster of emotions with my sports teams, I still find myself going back to those teams, purchasing the merchandise when I see their promotions, I buy tickets to the games and I'm watching the content, and I'm interacting with their digital content online.
These are just examples of how organizations in sports, media, entertainment, and games can use data, analytics, and AI to drive fan engagement. With so much content available to consumers today, organizations in sports, entertainment, and games have to find ways to compete for the consumer's time. There are only so many hours in the day, but countless options competing for each fan's attention. Sports dominate live broadcasts. When fans don't have exciting games to watch and they get tired of what's happening on the field, they have TV channels and streaming platforms where they can watch their favorite programming. When they get sick of that and want to do something else, they have games easily accessible to them on their mobile devices, tablets, computers, and consoles.
The intent to frequently return to entertainment offerings starts with a positive consumer experience, so organizations spend a great deal of time finding ways to retain their digital consumers. Since we're in the sports forum today, we'll analyze this through the lens of sports and the fan experience. Within a fan experience, customers and fans expect exciting content to be on broadcast and on TV when they can watch it. They want their experiences to be valuable and extremely entertaining, and they want these games to be thrilling and exciting.
When they interact with the content from these sports leagues and sports organizations they want these experiences to be streamlined and very clear. They want them to be easily accessible. They want to find the information and the media and the content that is important to them very quickly and they want to be able to navigate through these experiences very seamlessly. They also want interactive content. Fans and consumers want to find ways where they can interact with others that are watching the same thing as them. But they also find a lot of value when they're able to contribute to what's actually showing on the screens.
Finally, personalization is extremely important because with so much media and content being created every single day from games to live sports, they want to be able to find what they care about. Fans want to find content that is easily accessible on digital platforms and mediums, so they want to find personalized experiences through curated recommendations that are designed specifically for them. Experiences are generating data on all of these platforms for each individual consumer, and these data points are being captured across all of the channels that they're interacting with.
Building Fan Data Profiles: From Basic Analytics to Generative AI
A fan who purchases a ticket or signs up for an account creates a profile, establishing the foundation of that fan profile for an organization to capture, including geographic and demographic data. Based on what fans purchase, such as jerseys or other merchandise, organizations also learn those fans' preferences.
On the digital front, user behavior and interactions with applications and online sites are also being captured in real time. This includes the number of videos and articles users interact with, how long they watch, and all of that information is being captured to design content for these users and enhance their data profiles. Additionally, associations are being made between different fans. Fans supporting one team or league who are closely tied and related to other fans might share the same interests and support the same organizations. These are just a few examples of how data is being captured across different domains for building fan data profiles.
The concept of using data on fans and profiles is not new. In the very early stages of using these capabilities in sports, we used data and analytics to create fan data profiles and enable personalization and segmentation. Machine learning models were also being used to create advanced stats for sports, helping viewers watching broadcasts understand and become more versed in the games being shown on television. These stats in the earliest days were used to explain the biggest plays in sports and were very logic-based, explaining only things like how fast a player is running or how far the ball is traveling.
As capabilities in analytics, data, and AI became more advanced, organizations became more able to create new experiences. Advanced stats became more diagnostic and predictive in nature and could be applied to players, teams, and the different formations they played against. With the onset of generative AI, media and content became much faster and more accessible. Organizations in sports, media, and entertainment are finding ways to create solutions that help accelerate the process of finding content, putting it online, and placing it in front of fans and consumers. Information retrieval became even faster, and the process of solving low-level problems became much faster as well.
As organizations become more comfortable with utilizing AI capabilities, they look to enhance the fan experience and create new things and experiences for fans to enjoy. Personalization that we discussed earlier has become hyper-personalization, incorporating machine learning methods to increase accuracy and lead to greater and better recommendations for fans. New experiences and new digital products with AI-powered features are becoming increasingly commonplace. Organizations are finding ways to create solutions not just for their own teams but also for partners and other organizations within their entertainment ecosystem to improve the fan experience.
Real-World Success Stories: NFL Fan Platform, Bundesliga Personalization, and PGA Tour's AI Commentary
Today we would like to focus primarily on two areas: the fan data platforms that manage all of the data profiles, and creating new experiences and digital products using AI-powered features. We look to some of our sports partnerships where we have built similar solutions to gain inspiration on how we can do this across sports, entertainment, media, and games companies. The National Football League, or NFL, created a unified view of all of their fans with a platform built on AWS services and partner solutions. This fan platform processes over 90 billion rows of data with over 250 dimensions. All of these records are refined, resolved, enriched, and standardized into over 70 million fan data profiles stored and managed within Amazon Redshift. Every single day, over 1,000 data feeds are processed and standardized using AWS Glue and Amazon Kinesis data streams.
The result of creating this data platform gave them a better view and understanding of their fans by at least 4 times. This led to a 2 to 3 times increase in opt-in rates where fans and consumers opted in for their emails and promotions.
Looking at how preferences drive personalization, we can examine the Bundesliga, which personalized engagement in its app. Soccer is a very personal sport, and between Bundesliga and Bundesliga 2 there are 36 different teams. Across these teams, the Bundesliga determined that on average a fan supports about 4 different organizations and teams. That's approximately 1.4 million different combinations of fan preferences. This becomes even more complex as you incorporate factors such as demographics, preferences, and fans' interactions and behaviors.
The Bundesliga uses Amazon Personalize to better analyze and process these different combinations so it can curate better article recommendations in the Bundesliga app. After launching this feature, they saw a 17% increase in time spent within the app, a 67% increase in the number of articles opened through the personalized recommendation system, and a 32% increase in sessions where users actually opened articles.
Looking at how organizations can create an AI product to create a new fan experience, the PGA Tour created Tourcast, an AI commentary system. Traditionally on broadcast, the PGA Tour was only really able to cover about 25% of all players' swings throughout every single tournament, and all of those were limited to that broadcast. The PGA Tour is able to process and manage 53 million different data points every single tournament weekend, and this is all being managed and stored within DynamoDB. Using all of these data points, Amazon Bedrock is being used to create AI-generated commentary for every single shot that is taken throughout a PGA Tour tournament.
This commentary not only describes the shot based on the data points, but also provides context on what is happening on the course and what that shot means for that player within that specific tournament. As a result, the PGA Tour was able to cover 100% of all players' swings across all of their tournaments. What's incredibly impressive is that in less than 10 seconds from the time a player makes their stroke on the course, that AI-generated commentary and context is created and displayed within Tourcast.
Architecting Fan Data Platforms: Layers, Patterns, and AWS Services for Data Ingestion
We've taken a look at how data, analytics, and AI can be used by a number of our sports organizations and customers who have created these innovations. I'm going to invite Ashwini to talk to you about how you can create these using data and analytics services on AWS.
Hi everyone. My name is Ashwini Rudra, and I'm a Principal Solutions Architect with AWS. I have almost completed 8 years, 4 months, and 13 days here. I'm quoting exact figures because data is a very strong point when you're building any architecture; any informed decision you want to make requires data, and data brings the trust needed to build a data platform. I want to give you guidance on how you can do it, but first I want to know how many of you have a data engineering, ETL, or data analysis background.
What I'm going to do is use a lot of AWS services, but I might not get time to dive deep into each one. However, we'll learn about the patterns, which are very common patterns to build any fan 360, fan data platform, or fan genome platform. You can name it anything.
What you'll see is that any data platform and fan 360 platform divides into layers. A very common layer is the data source layer, where you decide what data sources or different data sources you are gathering about your fans. Then you'll see the ingestion layer, where you decide how you are going to ingest that data. You might have 60 or 70 sources, and each one might have three different characteristics: volume, variety, and velocity. You then have to process the data according to your data pipeline. There could be multiple pipelines in your whole data platform—one pipeline for your data engineering team, one for the analytics team, and one that goes directly to fans or third-party vendors or your partners.
You then have to store data according to their needs. Your machine learning models understand data in a different format than your analytics team's tools, so you have to massage the data and store it in different formats so that you can share information accordingly within the system, to your fans, or to your consumers. There are two more layers you can merge into one, or you can have them as separate layers—it's up to you because these are all logical layers. One is the intelligence layer, where you prepare data and share it with your data engineering or data scientist team so they can experiment with the data and tell you what extra value you can derive. The other is the activation layer, which is about your metrics. You share exactly with them what the KPI is, what the metrics are, what the dashboard shows, and what they want to achieve. So there are exactly two ways you are doing this nowadays: one is for the known—what you already know—and one is for the unknown, where you have no idea what might be possible.
I'm showing one more diagram here. You can see these are the data sources again, and they vary from customer to customer, team to team, and league to league. Very common sources include your league's own sources, data you're getting from teams or clubs, ticketing data if it's for fans, subscription information if they subscribe to different channels or subscription mechanisms, and merchandise information. As a fan, you might go and buy a t-shirt, and that information is held by that particular company. As a league or team, you want to ingest all that data, store it somewhere, and integrate it using different tools. We'll dive deep into each of these common patterns in the next few minutes. You then store data into different purpose-built databases. It could be a data warehouse like Amazon Redshift, or you could store unstructured data directly—either images or text-based data—in a data lake. Or you might want to transform your data based on volume, size, or variety, either using Amazon EMR or directly using AWS Glue. You then make that data available to your activation layer, and everyone can access it.
One thing I want to tell you: you can see there's a data catalog and governance layer, which customers generally implement with AWS Lake Formation or Amazon DataZone to govern that data and ensure data lineage, so how data changes is trackable. I'm sharing the same concept here with a different AWS diagram and services. I'll talk about all these patterns in more detail, and here's one example: data ingestion patterns, where you want to know how to ingest the data. There could be 60 or 70 sources. One common approach you might have heard about is the AWS Glue crawler. It has more than 60 connections where you can directly connect to those sources, crawl the database, and create a data catalog. Another approach is when your customer or partner exposes their database through an API.
You have three or four options. You can use containers with Kubernetes to connect with those APIs, massage the data, get the information, and put it into an S3 bucket, which we call the raw zone or bronze zone—multiple nomenclatures exist. For small payload sizes of less than 10 MB, AWS Lambda shines. You can directly use a serverless approach with AWS Lambda to call the API, get the payload, and store it into AWS services. If your APIs support batch processing, you can use AWS Batch. ECS is also another option, which I haven't mentioned there, but again, it depends on how the data is exposed.
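To make the serverless option concrete, here is a minimal sketch of a Lambda handler that pulls a small payload from a partner API and lands it in an S3 raw zone. This example is not from the session; the bucket name, endpoint, and key layout are placeholder assumptions.

```python
# Hypothetical Lambda handler: pull a small JSON payload from a partner API
# and land it in the S3 raw/bronze zone. Bucket, URL, and key layout are
# placeholders, not details from the session.
import json
import urllib.request
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")

RAW_BUCKET = "my-fan-platform-raw"                        # assumed bucket
PARTNER_API = "https://api.example.com/v1/ticket-sales"   # assumed endpoint


def handler(event, context):
    # Call the partner API; Lambda fits payloads well under ~10 MB.
    with urllib.request.urlopen(PARTNER_API, timeout=10) as resp:
        payload = json.loads(resp.read())

    # Partition the landing key by ingest date so downstream crawlers
    # (e.g., an AWS Glue crawler) can pick up new data incrementally.
    now = datetime.now(timezone.utc)
    key = f"ticketing/raw/{now:%Y/%m/%d}/{context.aws_request_id}.json"

    s3.put_object(Bucket=RAW_BUCKET, Key=key, Body=json.dumps(payload))
    return {"s3_key": key}
```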
For real-time streaming, I see a lot of confusion when people ask whether they should use Kinesis, Kafka, or Flink. Again, it depends on your payload size, how quickly you want to store data, and whether you want to transform your data during processing or later. For small payloads with lower variety and velocity, go with Kinesis Data Streams. If you want real-time transformation and delivery, go with Kinesis Data Firehose. For larger payload sizes, consider Flink or MSK, and MSK or even RabbitMQ work well depending on the velocity and variety of your data.
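As a rough illustration of the Kinesis Data Streams path, a producer can be as small as the sketch below. The stream name and event shape are assumptions for illustration.

```python
# Minimal producer sketch for the streaming path: fan-interaction events
# pushed into a Kinesis data stream. Stream name and event shape are assumed.
import json

import boto3

kinesis = boto3.client("kinesis")


def publish_fan_event(fan_id: str, event_type: str, payload: dict) -> None:
    """Send one fan-interaction event; using the fan ID as the partition key
    keeps each fan's events ordered within a shard."""
    record = {"fan_id": fan_id, "event_type": event_type, **payload}
    kinesis.put_record(
        StreamName="fan-interactions",              # assumed stream name
        Data=json.dumps(record).encode("utf-8"),
        PartitionKey=fan_id,
    )


# Example: a fan opens an article in the app.
publish_fan_event("fan-123", "article_open", {"article_id": "a-42"})
```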
The recommendation is to store data in S3, which is your raw zone. If your customers or partners deliver data via file upload and you just want to replicate it, AWS Transfer Family is available, which understands the FTP, FTPS, and SFTP protocols. They can upload directly, and it connects to S3, so your data lands in your landing zone. For batch processing, based on sizes, you have options like AWS Batch, EMR, or AWS Glue. AWS Glue offers over 250 built-in transformations, letting you do data validation, check for null values, fill in missing data according to your requirements, change date formats, or convert to Parquet so that you're selecting the right data format.
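A hedged sketch of what such a Glue job might look like, combining a null-value check, field standardization, and Parquet output. The catalog database, table, and S3 paths are placeholders.

```python
# Sketch of a Glue ETL job applying a few of the transformations mentioned:
# a null check, field standardization, and Parquet output. The catalog
# database, table, and S3 paths are placeholders.
from awsglue.context import GlueContext
from awsglue.transforms import ApplyMapping, Filter
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

# Read raw ticketing data registered in the Glue Data Catalog.
raw = glue_context.create_dynamic_frame.from_catalog(
    database="fan_platform_raw", table_name="ticketing"
)

# Data validation: keep only rows that carry a fan identifier.
valid = Filter.apply(frame=raw, f=lambda row: row["fan_id"] is not None)

# Standardize field names and types for downstream consumers.
mapped = ApplyMapping.apply(
    frame=valid,
    mappings=[
        ("fan_id", "string", "fan_id", "string"),
        ("purchase_ts", "string", "purchased_at", "timestamp"),
        ("amount", "double", "amount_usd", "double"),
    ],
)

# Land the cleaned data as Parquet so analytics engines can read it efficiently.
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://my-fan-platform-processed/ticketing/"},
    format="parquet",
)
```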
If you have a change data capture requirement where you have done a full load but have incremental changes every hour or day, you have AWS DMS, the Database Migration Service. You can replicate your data and use change data capture to identify how much data has changed, so you are only replicating the changed data into your data zone. There is also Amazon AppFlow, which connects to sources like SharePoint, other file storage, or Salesforce. AppFlow has over 50 connectors for different SaaS applications, so you can pull that information and land it in your landing zone, bronze zone, or raw zone.
Data Transformation and Identity Resolution: Creating Unified Fan Profiles
Now your data is in the landing zone and is available. Generally, what people do is create multiple data pipelines. I am giving an example of one, but the recommendation is to work backwards in two ways. One is working backwards where you know exactly what your partners, customers, team, marketing, or sales team want in terms of metrics. You know how your dashboard will look, where you will store the data, and exactly what KPIs you need. Then you build the whole pipeline according to that by deciding what format, what data format, what cleaning is required, and if it is machine learning based, what feature engineering is needed. After that, you export that data.
The other approach is transformation-ready enrichment data, which you expose to your data scientists: the unknown information. You give it to your experimentation team and ask what can be done and what the art of the possible is, and you expose data that way. You can see in the slide that there are two zones, one business-transformation ready and one analytics ready, which is how this approach is achieved. I spoke about AWS Glue, which handles the transformation, but the most common transformation I have seen in these pipelines is identity resolution, which I want to talk about shortly.
Identity resolution means you get data from multiple sources and do not know the connection between them, because one dataset comes from a merchandise team, one from ticketing information, and one from a team or league. How are they connected with respect to your fans? Are those the same fans or different fans? How do you determine that? Beyond that, you do incremental processing and quality checks: QA checks, handling null data, and transforming to your required format such as Parquet. Among all these transformations, fan loyalty scoring is one and fan segmentation is another.
These are statistical and mathematical changes. You use Amazon SageMaker for those mathematical model runs and then move the results to the curated zone. After that, your data is available for further processing.
The data can then be made available through APIs to fans and applications, or sent to your BI team. Let me speak about the first key aspect: identity resolution. This is about how you recognize that these fans are the same. Here's a real-world example: suppose I'm a fan and I purchase a ticket and watch a game. Then I go to another vendor and purchase a t-shirt, one of my favorites. Then I go to another app and look for videos and articles. As a team or league, you gather this information, but how do you realize that these three interactions are from the same fan or the same person?
There's also the scenario where it might not be the same person but a different person from the same family or friend group, so they are related to the same fan group. Identity resolution is normally done through standard matching, like checking whether names and addresses match to determine if it's the same fan. However, suppose you're getting millions of records; running that matching through normal processing takes time. In that case, you have two options: one is using fuzzy logic to decide and match, and the other is using a machine learning model.
You need to do fan profiling to find the relationships between fans and determine with confidence whether these are the same people or the same fans. This allows your marketing or sales team to do a better job informing your fans about offers, tickets, and loyalty scores. The process involves ingesting and normalizing data, creating a fan profile, and then matching it so you have a fan identity registry service. You can achieve identity resolution in two ways: either you use the AWS Entity Resolution service, a managed service where you don't have to manage infrastructure, or you process it through EMR, where algorithms are also available.
The recommendation is to first do the deterministic pass based on matching rules and fuzzy logic, and then run the machine learning algorithm. The reason is that some information might be missing, so the model gives you a probability that these fans are from the same group or that these two fans are the same. If your confidence is high, you merge those fans, confident that they are the same people based on the information you have. This allows you to structure the data in a confident way so you can make informed decisions about what to do with it.
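To illustrate the two-pass idea (not the session's actual implementation), here is a toy sketch: an exact rule first, then a fuzzy score for records the rule cannot link. The field names, weights, and the 0.8 confidence threshold are all illustrative assumptions.

```python
# Toy two-pass matcher: an exact rule first, then a fuzzy score for records
# the rule cannot link. Field names, weights, and thresholds are illustrative.
from difflib import SequenceMatcher


def normalize(record: dict) -> tuple:
    return (
        record.get("email", "").strip().lower(),
        record.get("name", "").strip().lower(),
        record.get("postcode", "").replace(" ", ""),
    )


def deterministic_match(a: dict, b: dict) -> bool:
    """Pass 1: an identical normalized email is treated as the same fan."""
    email_a, email_b = normalize(a)[0], normalize(b)[0]
    return email_a != "" and email_a == email_b


def fuzzy_score(a: dict, b: dict) -> float:
    """Pass 2: weighted similarity of name and postcode, in [0, 1]."""
    _, name_a, post_a = normalize(a)
    _, name_b, post_b = normalize(b)
    name_sim = SequenceMatcher(None, name_a, name_b).ratio()
    post_sim = 1.0 if post_a and post_a == post_b else 0.0
    return 0.7 * name_sim + 0.3 * post_sim


ticketing = {"email": "jane@example.com", "name": "Jane Doe", "postcode": "10115"}
merch = {"email": "", "name": "J. Doe", "postcode": "10115"}

if deterministic_match(ticketing, merch):
    print("same fan (rule match)")
elif fuzzy_score(ticketing, merch) >= 0.8:   # assumed confidence threshold
    print("probably the same fan; merge with high confidence")
else:
    print("keep as separate profiles")
```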
Analytics Infrastructure Decisions: Lakehouse vs. Data Warehouse and the Hybrid Approach
After that, your data is ready in the curated zone. How you expose it for further analysis is important. Everyone is talking about machine learning and AI, so we have two options there: one is SageMaker and one is Bedrock. For normal machine learning capabilities, suppose you want to do segmentation or binary classification that says yes or no. You go to Amazon SageMaker, which is well integrated with S3. Your data is already prepared in the format that algorithm expects. You connect, run your algorithm, get the results, and store them in S3, which is your curated zone. Your data is ready, and you might have multiple pipelines.
The other option is if you want to get a summary of the fan, such as this fan is a loyal fan from the past seven years and has a bronze medal with this many points gathered. You can use Amazon Bedrock to get the summary result, store the data as necessary, and expose it to your application or third-party members. For generative AI, you may need a vector database. There are multiple options. I have shared OpenSearch, but you can also achieve this with PostgreSQL. Suppose you want to calculate the relationship between fans. You may need a graph database. In that case, Amazon Neptune is another choice.
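A minimal sketch of that fan-summary idea using the Bedrock Converse API might look like the following. The model ID and the profile fields are assumptions; any Bedrock text model that supports Converse would work similarly.

```python
# Hedged sketch: ask a Bedrock foundation model to summarize a fan profile.
# The model ID and the profile fields are assumptions.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

fan_profile = {
    "fan_id": "fan-123",
    "member_since": 2018,
    "loyalty_tier": "bronze",
    "points": 4200,
    "favorite_teams": ["Team A", "Team B"],
}

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",   # example model ID
    messages=[{
        "role": "user",
        "content": [{
            "text": "Summarize this fan profile in two sentences for a "
                    "marketing dashboard:\n" + json.dumps(fan_profile)
        }],
    }],
)

print(response["output"]["message"]["content"][0]["text"])
```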
One other option, which I mentioned here and will come back to shortly, is Redshift, a data warehouse solution. Most of the time I see customers getting confused about whether they should go for a data lake, lakehouse, or data warehouse, what the differences are between them, and which pattern they should choose. How to expose data to BI reporting and clean rooms is also important. When your data is ready, for BI options you can use any BI tools that connect to S3; we have QuickSight, which is AI-enabled nowadays.
For ad hoc queries you can use Amazon Athena, where you can run SQL-based queries and get results; a small sketch of that pattern follows below. People are also sharing data with their partners using AWS Clean Rooms. Let's talk about one of the very important architecture decisions: lakehouse or data warehouse. What should you do? This is very basic, high-level guidance. If you have a strong data engineering team with data scientists who understand open source models, have basic analytics capabilities, and want to experiment with your data, go with the data lakehouse approach.
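Here is that Athena sketch: a rough example of running an ad hoc aggregation from Python and polling for the result. The database, table, and output location are placeholder assumptions.

```python
# Run an ad hoc SQL aggregation over curated S3 data with Athena.
import time

import boto3

athena = boto3.client("athena")

query = """
SELECT loyalty_tier, COUNT(*) AS fans
FROM fan_profiles            -- assumed Glue Catalog table over S3
GROUP BY loyalty_tier
"""

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "fan_platform_curated"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes (fine for a demo; event-driven is nicer).
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    results = athena.get_query_results(QueryExecutionId=query_id)
    for row in results["ResultSet"]["Rows"][1:]:   # skip the header row
        print([col.get("VarCharValue") for col in row["Data"]])
```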
If you already know what you're doing, with known metrics, dashboards, and specific aggregation queries you want to run, go with a data warehouse approach, especially if your team has traditional SQL and BI skills. A data lakehouse generally uses Amazon EMR with either Apache Iceberg or Apache Hudi. The question is how you achieve transactions and ACID properties on the data lake: you use Apache Iceberg or Hudi, which provide a transactional approach for you. With a data warehouse, these capabilities are already built in.
However, suppose nowadays you also want to experiment and share your raw data with your data engineering team while also getting data warehouse capabilities where you can directly connect your BI tools. The best thing about AWS is that you can use a hybrid approach. You don't have to worry about choosing between a lakehouse approach and a data warehouse approach because Amazon Redshift runs in three modes. Redshift has a Spectrum mode wherein, based on ad hoc needs, you can create infrastructure, pull the data, run the query, and get results, or you can use the serverless mode as well.
There are three examples where lakehouse and warehouse approaches work together: you build a warehouse while using Iceberg or Hudi on the lake and still get results. There will be multiple pipelines in your infrastructure, but three examples stand out. Suppose you're doing real-time analytics and engagement where you run queries on the lakehouse. Suppose you have historical data that is very structured; the warehouse approach is better there. Suppose you have multi-channel journey analysis where data sources are disjoint with multiple varieties and structures; again, the lakehouse approach is better.
You have unstructured data like images and videos; of course, the lakehouse is better in that case. Everything works, but these are the better approaches, and that's why I say the hybrid approach is always the best approach here. We talked about known and unknown. Known is easy because you know what you want to do and have a pattern. Unknown is like how this data, which is exposed to your data engineering team or machine learning specialist, could be used and what possibilities exist.
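On the ACID point above, here is a hedged PySpark sketch of a transactional upsert into an Apache Iceberg table. The catalog configuration, table names, and schema are illustrative assumptions, and the Iceberg Spark runtime jar must be available in your environment.

```python
# Sketch: transactional upsert (MERGE) into an Apache Iceberg table from
# Spark, which is what gives the data lake ACID behavior. Catalog config,
# table names, and schema are illustrative; the Iceberg Spark runtime jar
# must be available in your environment.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("fan-profile-upsert")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    .config("spark.sql.catalog.lake.warehouse", "s3://my-fan-lakehouse/")
    .getOrCreate()
)

# Incremental changes staged by the pipeline (e.g., CDC output from AWS DMS).
spark.read.parquet("s3://my-fan-platform-processed/profile_updates/") \
    .createOrReplaceTempView("updates")

# Atomic upsert: matched fans are updated, new fans are inserted.
spark.sql("""
    MERGE INTO lake.curated.fan_profiles AS t
    USING updates AS u
    ON t.fan_id = u.fan_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```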
Creating AI-Powered Fan Experiences: From Content Platforms to Multi-Agent Architectures with Amazon Bedrock
Jake will share some insight here. Jake, over to you. So Ashwini just walked through a few examples of how you can create data platforms using AWS and highlighted the flexibility of that. What we'll talk about now are the ways you can use AWS and the flexibility of our AWS AI suite to create new products and experiences for fans with AI-powered features. AI is opening up even more possibilities to deliver new experiences that delight fans more than ever before. Organizations can create new digital products that bring fans even closer to the sports, the content, and the data behind everything they truly enjoy today.
There's greater demand from fans in the mid-twenties through mid-thirties age bracket who enjoy stat integrations on broadcasts, so advanced analytics have been a huge success with many fans in that age bracket. These AI features don't only look like stat integrations on broadcasts and your digital interfaces and applications; they can run in direct-to-consumer apps as well. Indirectly, organizations can create new products, solutions, and tools that elevate the entire entertainment ecosystem with their partners, because those partners, for example in sports, are your broadcasters, and TV is the interaction point through which most fans experience sports.
Those broadcasters are the first point of interaction and are there to explain what is going on. By providing them with tools and solutions that help them explain what's happening and arm them to better talk about all the AI-powered stats, we're directly catering to that age group that highly demands stat integrations. With generative and agentic AI, there's the possibility to create new features and improvements on the features and capabilities in your current offerings on your digital applications.
Our customers in media, entertainment, and sports are able to incorporate new agentic workflows and generative AI solutions into the solutions and offerings they have today. Diving into some examples of how AI-powered solutions can create new capabilities on top of what they already provide, we start by looking at content platforms. Organizations are able to create ways for their fans and digital consumers to explore their media archives and content through AI-powered search built on semantic search and video understanding capabilities. This opens up new options for organizations with large media archives to inject new life into their older content rather than having it sit on the digital shelf with no consumption.
As sports expand across the world and become more globalized, the localization and translation of content is more imperative than ever. Localizing and translating content into the languages that fans understand is critical for the expansion of those teams and organizations across the world. With so much content being generated, whether recordings or video on demand, fans want to be able to find the content they care most about. Automatically detecting highlights and clipping them from longer-form video creates a short-form experience where fans can find highlights very quickly.
Taking a look at digital products such as applications for leagues and sports teams or online sites and subscriptions, we've talked about personalization and hyper-personalization. Incorporating machine learning methods for hyper-personalization within apps to drive greater recommendations is what customers are looking to do. Semantic layers such as chatbots and virtual assistants are providing customers and their fans a way to find information faster than ever. Fans no longer have to spend so much time navigating their interfaces; they can quickly ask questions for what they're looking for, such as events they want to attend.
Looking at how you can create interactive experiences for your fans, creating ways for fans to contribute content and contribute to the experience that most fans will view on TV is a great way to drive fan engagement. Fans want to be able to contribute generated designs or make contributions whether in the form of contests or other capabilities for them to participate in. We've talked a lot about partner solutions and tools. Looking at how they can elevate their entertainment ecosystems with their partners, we take a look at the broadcasters and how they can prepare for in-game and pre-game analysis with advanced stats and other talking points for them to highlight throughout the viewing experience for fans.
With the rise of AI agents, they're finding ways to access data, APIs, and other solutions even faster. So how can organizations and customers in media, entertainment, and sports create some of these types of solutions? With the AWS AI stack, we see this as a toolkit for customers to use to build their own solutions. We see this as a way to provide the infrastructure, the compute, and managed services that help them create new offerings.
AWS provides infrastructure and compute through different chips, and we also provide the capability and flexibility to train your own machine learning models through Amazon SageMaker AI, where organizations can build, train, and deploy their own AI and ML models. Looking at Amazon Bedrock, customers can access foundation models, including Amazon's own models such as Amazon Nova and leading third-party models. Within Amazon Bedrock there is Amazon Bedrock Agent Core, which gives customers the full suite of capabilities to create their own AI agents to put into different solutions and experiences. For customers who don't want to spend time creating new solutions, there are managed solutions as well, such as Amazon Quick Suite, which helps customers find information and insights faster and automate workflows.
Let's take a look at how some of these AI services on the AWS stack can be used to create an agentic AI solution for fan experiences. We'll walk through an example where we are creating an agentic solution for creating and generating content based on games that happen within sports. Users and fans will access these through apps on their phones or devices. Using Amazon Bedrock Agent Core, customers and organizations are able to create these solutions using multi-agent architectures.
Agent Core runtime is used to orchestrate and facilitate the workflows of all these different agents. In this example, we would have one agent performing data analysis on games, one agent performing image search within the media library, and another agent generating content and putting it all together. Agent Core runtime helps customers deploy and orchestrate agents built with frameworks such as LangChain and LangGraph and with models such as Anthropic's Claude. It provides the instructions and context for these agents to work together.
These agents share the infrastructure and components for memory management, gateway authentication, and runtime management. They can be directly invoked or communicate with each other via agent-to-agent protocol. Looking at how these agents can use other tools and data sources, Agent Core Gateway is a capability that lets agents access APIs, whether first-party or third-party data sources, or your own data warehouse and storage solutions, turning them into tools these agents can actually utilize.
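The Agent Core SDK itself isn't shown in the session, but the multi-agent pattern can be sketched with plain Bedrock Converse calls: one narrowly scoped agent per responsibility and a simple orchestrator standing in for the Agent Core runtime. All prompts, function names, and the model ID below are illustrative assumptions.

```python
# Illustrative multi-agent pattern (not the Agent Core SDK): three narrowly
# scoped "agents" backed by a Bedrock model, with a simple orchestrator
# standing in for the role Agent Core runtime plays in the architecture above.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"   # example model ID


def ask(system_prompt: str, user_text: str) -> str:
    """One Converse call scoped to a single agent's responsibility."""
    response = bedrock.converse(
        modelId=MODEL_ID,
        system=[{"text": system_prompt}],
        messages=[{"role": "user", "content": [{"text": user_text}]}],
    )
    return response["output"]["message"]["content"][0]["text"]


def data_analysis_agent(game_stats: str) -> str:
    return ask("You analyze game data and report the key storylines.",
               game_stats)


def image_search_agent(storylines: str) -> str:
    # In a real system this agent would call a media-library tool exposed
    # through Agent Core Gateway; here it only proposes search queries.
    return ask("Suggest media-archive search queries for these storylines.",
               storylines)


def content_agent(storylines: str, media_refs: str) -> str:
    return ask("Write a short fan-facing recap combining the inputs.",
               f"Storylines:\n{storylines}\n\nMedia:\n{media_refs}")


# Orchestrator: sequence the agents and pass context between them.
stats = "Team A beat Team B 3-1; player X scored twice in the second half."
storylines = data_analysis_agent(stats)
media = image_search_agent(storylines)
print(content_agent(storylines, media))
```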
Here are a few key takeaways that we want you and your organizations to start incorporating within your fan experiences. Overall, elevate the fan experience; organizations can do this by incorporating hyper-personalization techniques and AI-powered features to create new experiences for their end consumers.
To better understand and interact with your fans and consumers, you can build robust data profiles with the modern suite of AWS services to create a modernized data platform. You can enhance traditional data pipelines and platforms using AI and ML methods such as identity resolution on top of that.
Finally, you can utilize the AWS stack, such as the AWS AI stack and other modernized services, to build brand new products, experiences, and platforms for viewers and fans to enjoy.
Overall, we are excited for what you and your organizations will build that engages and excites your fans and consumers. Thank you for being here. We'll open the floor to questions. There are microphones in the back, and we'll also be around in the sports forum today.
; This article is entirely auto-generated using Amazon Bedrock.