🦄 Making great presentations more accessible.
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.
Overview
📖 AWS re:Invent 2025 - Revolutionizing Audi's Welding Inspection System through AI (IND367)
In this video, AWS and Audi present how AI revolutionizes welding inspection in automotive manufacturing through the Digital Production Platform (DPP). Matthias Mayer from Audi and Fabrizio Manfredi from AWS demonstrate two use cases: resistance spot welding analytics, which analyzes 5 million welding spots daily using cloud-based machine learning and goes far beyond the previous manual inspection of roughly 10,000 points per day, and weld splatter detection, which uses computer vision with edge inference delivering results within 20 seconds to guide grinding on the line and, in a next step, automated grinding robots. The architecture leverages AWS IoT Greengrass, MQTT, Kinesis, and SageMaker, implementing principles like copy data once, composable design, and infrastructure as code for scalability across 50+ plants. Key learnings emphasize data quality, starting small, continuous refactoring, and bridging IT-OT cultural gaps. The partnership expansion aims to connect 120+ Volkswagen Group factories with 450+ use cases delivering significant cost savings.
; This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.
Main Part
Manufacturing Challenges and the Digital Production Platform Vision
Hello everyone. Welcome to this session, where you will learn how Audi is revolutionizing their welding inspection system through AI. Thank you for being here today. I hope you had a pleasant journey to Vegas yesterday or on Saturday. Before we jump right into the topic, feel free to take a seat. We will discuss some general points about AI within manufacturing and the challenges that we are currently facing. The top 500 manufacturers lose 1.4 trillion dollars annually due to inefficiencies caused by production bottlenecks. As you see in the graphics, the cost of one hour of unplanned downtime has more than doubled in the past five years, from 2019 to 2024. This is a clear indicator of increasing complexity within production and also increasing production costs.
When we think about what we want from today's products, we really want them delivered fast, with delivery times coming down from years to months or, depending on the product, even to weeks. We want unique value delivered to us. We want to feel that a product is made just for us. This is why manufacturers are under pressure to increase innovation while keeping quality high and production costs low. Third, we want smart, intelligent products. Especially as this is an Audi presentation, when we think about cars, we want a smart car. Sitting in the car, the car needs to recognize that we are the driver and set everything up just for us. These things make it clear that something needs to happen in order to put AI into place within production and into the products.
When we think about how AI can help with all these challenges, first of all, we need to synthesize, meaning combine your information even if it is spread across different departments. Try to centralize it as much as you can so that everyone is able to leverage it. The second one, and I really think this is the most important one, is automation. Try to streamline routine tasks and workflows on a daily basis and automate them as much as you can. Third, we need to focus on enhancing, which means moving to data-driven decision making. Use the data you extract from your processes and really come to data-driven decisions. Fourth, we need to focus on preserving. Do not rely on that one person in your company who has all the knowledge of one process or one product. Try to extract all of that knowledge from that person and make it centrally available.
There is one customer of ours, the Volkswagen Group, that recognized all of these challenges five years ago, when we started our joint journey toward the Digital Production Platform, or DPP for short. What was Volkswagen facing at that time? Increasing complexity, decreasing efficiency, increasing production costs, and a fragmented landscape across their production systems. That led to our vision of a centralized platform with centralized data management: connecting all of their 120-plus factories to the cloud, extracting and leveraging the data, and building smart use cases on that platform. This decreases complexity, increases efficiency, and decreases production costs, and it helps harmonize the IT landscape across the factories. As I said, we are walking that path together now.
Over the past five years, we have built joint principles together. We exchanged perspectives on our cultures because, as you can imagine, AWS and Volkswagen don't share exactly the same culture, and we built smart technologies. You'll see two wonderful examples of this today. That's why I'm happy to have two wonderful colleagues with me today: Matthias Mayer, who is a manufacturing planning expert at Audi AG. He has over 20 years of experience in automotive and has been with Audi for 15 years. Over the past 6 years, he has been focusing on OT-IT technology and AI development at Audi. We also have Fabrizio Manfredi, a Principal PS Cloud Architect within AWS. He has been with AWS for 6 years now and has supported the DPP development from the beginning, so he knows the platform like no one else. He is really the expert for cloud architecture when it comes to smart manufacturing.
Audi's Production Landscape and the Body Shop Challenge
I'm Verena Koutsovagelis, a Customer Solutions Manager at AWS and also the alliance manager for the DPP program. I have been with AWS for 3 years now and I'm helping my customer Volkswagen to accelerate their cloud adoption journey and execute on cloud migration projects. So I'm happy to have you two here on stage with me. Thank you, Verena. I'm Matthias, and we start with the company presentation of Audi, with a little bit of insight into Audi that maybe you don't know. First of all, we are part of the Volkswagen Group, and we are the leading brand of the Brand Group Progressive at the Volkswagen Group. It includes Lamborghini, Ducati, and Bentley in that group, and Audi is the leading brand.
Looking at the numbers from 2024, we have delivered more than 1.6 million cars to customers across Audi, Lamborghini, and Bentley, more than 10,000 Lamborghini and Bentley vehicles, and more than 50,000 Ducati motorcycles. We have more than 88,000 employees all over the world, an operating profit of 3.9 billion euros, a return on sales of 6 percent, and a net cash flow of 3.1 billion euros.
These are our production sites all over the world. The headquarters is located in Ingolstadt, and we share some plants with Volkswagen, for example in Bratislava and Tsuka. We also have brand plants in Brazil and Argentina. The newest Audi plant is San José Chiapa in Mexico. You can see many plants across China as well; these are partnerships with our Chinese partners SAIC and FAW. The plant where I'm located is Neckarsulm. It was founded in 1833, a really long time ago. We started with knitting machines, and then NSU was created as a brand. Later, out of NSU came Wanderer and Audi, and these are the four rings of Audi, the four brands that started the Audi brand. Last year, we had a production of more than 130,000 cars at that plant. Our plant manager is Fred Schulze. We have more than 50,000 employees, and the models A5, A6, A7, A8, and the e-tron GT are produced at Neckarsulm.
We have a strategy in our production of pushing the limits of innovative and sustainable production. I'm at the department production lab, so we create new technology for our production environment from the beginning, from development to series production, and then hand it over to our planning department, and they bring it to every plant all over the world. I'm responsible for the body shop at the innovation department. Let's take a closer look at the body shop from the A6 and A5 production line. We have approximately 1,000 cars a day on that line. We have around 1,200 robots.
In that body shop we also have 800 welding guns. Each vehicle receives 5,500 welding spots, which adds up to over 5 million welding spots per day. Resistance spot welding is the main joining technology in the body shop, so we developed two use cases for this technology. The first use case is resistance spot welding analytics, and the second is weld splatter detection. Let me focus on the resistance spot welding analytics first.
Resistance Spot Welding Analytics: From 10,000 to 5 Million Quality Checks
We created this use case together with our partner AWS. We built the entire infrastructure from the shop floor to the cloud, which we call the Digital Production Platform. Before this use case, we did not have infrastructure connecting the shop floor to the cloud. We discussed extensively with our security departments about how to build this infrastructure and how to bring 5 million welding spots per day up to the cloud and the dashboards back to the shop floor. Since the shop floor is a very large building with limited internet access, we needed to bring the data to shop floor PCs. We created an environment not only for our customers in the body shop but also for data scientists to train machine learning models and deep learning models. We created many dashboards on the data for our customers, including a quality dashboard for the quality assurance department and a maintenance dashboard, among many other dashboards and reports built on the data. The architecture we built is very scalable.
Before this use case, the quality assurance department checked around 10,000 points per day using ultrasonic devices. They checked random samples from the production line, examining approximately one car with 10,000 points per day. When we picked random parts and they were good parts, there was no problem. However, when we had a mixture of good and bad parts, we might pick only the good parts, so we did not find the faults in our factory. Only when we had nothing but faults did we certainly find them. This is why we created this use case. The quality assurance was based on checking 10,000 points, and now we base our quality assurance on 5 million points. We check not just 10,000 points but 5 million points via AI. This is what it looks like: we have our algorithm and our dashboard positioned between the quality assurance and production. The dashboard shows how the process runs, how it progresses, and where the defects and problems are. If we have a bad part, we can see it all on the dashboard. Now it is not a static process; it is dynamic, so I can react and identify where the pain points are.
In the past, as I mentioned, we validated 5 vehicles per week, which was one car per day. In the validation phase, we ramped that down to 3 cars, and right now we have only 1 car as a static process. The static process must remain static because of our quality assurance requirements, while all the other checks are dynamic. This is mainly the explanation of the AI model. We have our data from the machines. As I mentioned, around 800 welding machines in that production line send the data up to the cloud. We have the resistance curve and the stability in that data. In one welding spot, there are around 400 values. We have the parameters, which are the reference for the welding spot, and we have the domain knowledge from our experts about how a welding spot should really look.
So how should a welding spot, or its resistance curve, really look? We trained two models: a regression model trained on the ultrasonic checks that gives us the ultrasonic check result based on AI, and an anomaly detection neural network that flags anomalies in our curves or in our data. Now we take a deeper look at the architecture, presented by Fabrizio.
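Before diving into the architecture, here is a minimal editorial sketch of the two-model idea, not Audi's implementation: a regression model predicts the ultrasonic check result from a welding spot's curve, and an anomaly detector flags curves that deviate from the learned normal shape. The synthetic data, the 400-value curve length, and the use of scikit-learn in place of the neural network mentioned in the talk are all assumptions.

```python
# Editorial sketch of the two-model approach described above (not Audi's code).
# Assumption: each welding spot is summarized by a fixed-length resistance curve
# (roughly 400 values, as mentioned in the talk), and ultrasonic check results
# exist for a subset of spots. scikit-learn stands in for the neural network.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, IsolationForest
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data: 2,000 spots, 400 samples per resistance curve.
curves = rng.normal(loc=1.0, scale=0.05, size=(2000, 400))
ultrasonic_result = 4.0 + curves.mean(axis=1) + rng.normal(0.0, 0.1, 2000)

# Model 1: regression that predicts the ultrasonic check result from the curve.
X_train, X_test, y_train, y_test = train_test_split(
    curves, ultrasonic_result, test_size=0.2, random_state=0
)
regressor = GradientBoostingRegressor()
regressor.fit(X_train, y_train)
print("predicted check result for first test spot:", regressor.predict(X_test[:1]))

# Model 2: anomaly detection over all curves, flagging spots whose resistance
# profile deviates from the learned normal shape.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(curves)
flags = detector.predict(curves)  # -1 = anomalous curve, 1 = normal
print("anomalous spots:", int((flags == -1).sum()))
```

In practice, the regression model can only be trained on the spots that were actually measured with the ultrasonic device, while the anomaly detector can learn from every recorded curve.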
Cloud Architecture for Near Real-Time Welding Data Processing
As we have seen, we are going to understand the architecture better through its functional components. As mentioned, all the devices are on-premises. There is a factory, and we are collecting data from the welding controllers. As has already been mentioned, we have 5 million welding spots. Each welding spot contains up to 50,000 data points, because the data for the three most important curves (resistance, voltage, and pressure) is collected at one-millisecond intervals.
A welding spot lasts more than 5 seconds, which means 5,000 points for each curve. On top of the 5 million spots, you sometimes have to multiply by 25,000, and then we are dealing with billions of welding data points that have to be collected and stored. The welding controller collects all this information. The welding controller is the machine that controls the welding guns on the robot and has all the information. It pushes the information through MQTT to an edge gateway.
You can already see the typical structure of manufacturing, which is divided into domains. Each manufacturer has different domains that are separated, and the production line in particular has strong constraints against sending data to the internet or talking directly to many other components. What we have introduced here is an edge gateway, an intermediate element that is able to communicate with the internet. It collects the data from the welding controller and then sends the data to the cloud. At the same time, we had to collect other data that is present in the system, such as the ultrasonic data used to check the quality result, which sits in siloed systems on the shop floor or at the IT level.
For that type of data that usually changes not so often—one car per day—or to know the material that is used, we do not need real-time communication. We do not need MQTT, and then we collect it more as a batch file. We have a dedicated component inside the gateway that manages the file and uploads it in batch mode. Now this is collected. This is the edge gateway functionality, optimizing the collection and then sending to the cloud.
The first path is MQTT, over which we send the data. We discovered immediately that it probably was not a good idea to transfer these billions of data points through MQTT. For this reason, we use MQTT mainly for control, getting alarms, and everything that needs real-time action, but not for the telemetry data. For the telemetry data, we optimize the transfer through streaming, and we started to categorize the data transfer. The first category is real-time MQTT; the second is telemetry streaming data, with some batching and packaging inside to reduce the number of calls and packets transferred to the cloud; and the last one is files, a typical file upload that happens on a time schedule or on certain changes.
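As a rough sketch of this categorization (message fields, topic names, and thresholds are invented for illustration and not taken from the DPP), the gateway logic that routes each message to one of the three paths could look like this:

```python
# Illustrative routing of shop-floor messages into the three transfer paths
# described above: real-time MQTT, batched telemetry streaming, and file upload.
# Message fields and category names are assumptions made for this sketch.
import json
from enum import Enum

class TransferPath(Enum):
    REALTIME_MQTT = "mqtt"        # alarms, control, anything needing immediate action
    TELEMETRY_STREAM = "stream"   # high-volume curve data, micro-batched before upload
    FILE_UPLOAD = "file"          # slow-changing batch data, e.g. ultrasonic results

def classify(message: dict) -> TransferPath:
    """Decide how a message travels from the edge gateway to the cloud."""
    if message.get("type") == "alarm" or message.get("requires_action"):
        return TransferPath.REALTIME_MQTT
    if message.get("type") == "telemetry":           # e.g. resistance/voltage curves
        return TransferPath.TELEMETRY_STREAM
    return TransferPath.FILE_UPLOAD                  # e.g. daily ultrasonic check files

if __name__ == "__main__":
    samples = [
        {"type": "alarm", "spot_id": "A123", "reason": "current deviation"},
        {"type": "telemetry", "spot_id": "A124", "curve": [0.98, 1.01, 1.02]},
        {"type": "ultrasonic_batch", "file": "checks_2024-11-30.csv"},
    ]
    for m in samples:
        print(classify(m).value, json.dumps(m)[:60])
```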
Then we identified the pattern and the optimal way to transfer the data to the cloud. As a second step, we store all this information in a data hub. Today we are migrating to a data mesh; Volkswagen Group and Audi have a large project around data mesh, but the main point is to put data in a data hub so that it is transferred once and used by multiple use cases. The idea was for the architecture to be reusable and extendable. How? Mainly by following one of the first principles: transfer once and decouple consumer from producer. You can see this immediately in the diagram: the different optimizations and the decoupling between the producer and the consumer, which in this case is the machine learning part.
Then we implemented an environment where the data scientist is able to access the data and perform experimentation, making all the adjustments the data needs to properly train a model, and then perform the training. After that, we deploy the model and run the inference. The result of the inference feeds a visualization layer that is fundamental for the operator on the line, who needs to know if something goes wrong. This operation did not need to be real time, because the car receives some 5,000 welding points across many stations and each station takes about 1 minute, so you have several minutes to evaluate the car. Therefore, you don't need a sub-millisecond answer from the inference. This is why all the inference runs in the cloud. We will see that in the next use case the situation is a little bit different.
Now let's see how it's implemented. We know the functionality, the concept, what we have used to decouple, and how we want to scale. We have a factory with all the robots that stay in a dedicated network. They send this information to an MQTT broker, usually in that separate network we call southbound, which is still not a network that is able to talk to the internet. It's also an intermediate network where you have other databases. This broker is pushing to the edge gateway that is usually positioned in what we call northbound, which is the component in the area network that is able to go to the internet or talk to the cloud. In this case, we are not using public internet but rather a private direct connect.
The edge gateway is based on AWS IoT Greengrass, which handles the division and categorization of the messages into three parts. The first part is MQTT, going through the MQTT broker that is installed inside Greengrass. The second component is the stream manager, the Greengrass component that is able to stream and also package data, with parameters that let you optimize the transfer to the cloud (see the sketch after this paragraph). Then we also have a component that is able to upload files. Once collected, Greengrass pushes this data to a dedicated account in the cloud that we typically call the ingestion account.
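For the telemetry path, here is a hedged sketch of what a Greengrass component can look like with the AWS Greengrass Stream Manager SDK for Python. The stream names, batching parameters, and payload shape are assumptions, not the DPP code.

```python
# Minimal Greengrass component sketch: append welding telemetry to a
# Stream Manager stream that exports to Kinesis Data Streams in batches.
# Stream names, batch sizes, and the payload shape are illustrative assumptions.
import json
import time

from stream_manager import (
    ExportDefinition,
    KinesisConfig,
    MessageStreamDefinition,
    StrategyOnFull,
    StreamManagerClient,
)

STREAM_NAME = "WeldingTelemetry"        # local Stream Manager stream (assumed name)
KINESIS_STREAM = "welding-telemetry"    # target Kinesis Data Stream (assumed name)

client = StreamManagerClient()

# Create the local stream once; Stream Manager batches and uploads in the background.
client.create_message_stream(
    MessageStreamDefinition(
        name=STREAM_NAME,
        strategy_on_full=StrategyOnFull.OverwriteOldestData,
        export_definition=ExportDefinition(
            kinesis=[
                KinesisConfig(
                    identifier="WeldingKinesisExport",
                    kinesis_stream_name=KINESIS_STREAM,
                    batch_size=500,                  # micro-batching on the edge
                    batch_interval_millis=60_000,    # flush at least once a minute (illustrative)
                )
            ]
        ),
    )
)

# Append one record per welding spot; the real curves would be much larger.
spot = {"spot_id": "A123", "timestamp": time.time(), "resistance_curve": [0.98, 1.01]}
client.append_message(STREAM_NAME, json.dumps(spot).encode("utf-8"))
```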
In the ingestion account, you have different interfaces. For the streaming interface, we are using Kinesis. We have Kinesis Data Streams for input and Kinesis Data Firehose for output, which means we also do some batching there. We do a little micro-batching on Greengrass and batching on Firehose, which means we are able to transfer and land the data with a delay of 20 to 30 seconds. The second interface is IoT Core, where the delay is mainly network latency, usually a couple of hundred milliseconds.
Now all this data is moved out of the ingestion account, and you can immediately see that we are using a multi-account approach. There is an account for the data hub, with a dedicated bucket where we store and process the data, because we also optimize its structure for better performance later in the machine learning part. This is central and is used by multiple use cases, not only this one. The data is then accessible to the dedicated data scientist account, where the data scientists, through SageMaker, are able to operate, create the machine learning model, and perform the training.
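As an illustration of the data scientist side (a sketch under assumptions, since the talk shows no code), the SageMaker Python SDK can launch a training job against the curve data shared from the data hub bucket. The entry-point script, framework choice, hyperparameters, and bucket path below are invented.

```python
# Sketch: a data scientist in the dedicated account launches a training job
# against the curve data shared from the data hub bucket. Bucket names, the
# entry-point script, and the framework choice are assumptions for illustration.
import sagemaker
from sagemaker.sklearn import SKLearn

session = sagemaker.Session()
role = sagemaker.get_execution_role()   # execution role of the data scientist account

estimator = SKLearn(
    entry_point="train_welding_model.py",      # hypothetical training script
    framework_version="1.2-1",
    instance_type="ml.m5.2xlarge",
    instance_count=1,
    role=role,
    sagemaker_session=session,
    hyperparameters={"curve_length": 400},     # illustrative only
)

# Read-only access to the optimized curve data in the central data hub account.
estimator.fit({"training": "s3://example-data-hub/welding/curves/"})
```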
The training of the model is automated and runs on a schedule, in our case weekly, to see if there is a better model. If there is, the model is deployed in the application account, which is the real use case. The application account runs an inference that reads the data from the data hub. You can see that in this case we don't duplicate the data; the data is copied once, which is one of the principles of the architecture. On top of that, the result of the inference is stored, in our case, in a time series database that is used by the front end for all the visualization.
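The talk does not name the specific time series database, so the sketch below assumes Amazon Timestream purely for illustration; the database, table, and dimension names are made up.

```python
# Sketch: store one inference result per welding spot in a time series table
# for the dashboards. Amazon Timestream is assumed here as the time series
# database; all names below (database, table, dimensions) are illustrative.
import time
import boto3

timestream = boto3.client("timestream-write")

def store_inference_result(plant: str, station: str, spot_id: str, anomaly_score: float) -> None:
    """Write a single inference result so the front end can plot it over time."""
    timestream.write_records(
        DatabaseName="welding_analytics",        # assumed database name
        TableName="spot_inference",              # assumed table name
        Records=[
            {
                "Dimensions": [
                    {"Name": "plant", "Value": plant},
                    {"Name": "station", "Value": station},
                    {"Name": "spot_id", "Value": spot_id},
                ],
                "MeasureName": "anomaly_score",
                "MeasureValue": str(anomaly_score),
                "MeasureValueType": "DOUBLE",
                "Time": str(int(time.time() * 1000)),  # milliseconds since epoch
            }
        ],
    )

store_inference_result("neckarsulm", "station_42", "A123", 0.07)
```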
When a specific anomaly is detected, there is a Lambda function that is able to raise alarms directly to the shop floor and notify the operator that something is going wrong (a minimal sketch of such an alarm function follows this paragraph). The operator on the shop floor can read the dashboard, which is created in the cloud, through a proxy. All the computation is in the cloud, and all the infrastructure is in code. We are using infrastructure as code because we wanted to automate and follow DevOps best practices, but also to make it super easy to implement in multiple plants. Each plant has its own use case instance, implemented and managed.
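Here is that sketch, assuming a Lambda function that publishes over AWS IoT Core when an anomaly score crosses a threshold; the topic layout, threshold, and event shape are assumptions, not the DPP implementation.

```python
# Sketch of the alarm path: a Lambda function publishes an MQTT message via
# AWS IoT Core so the edge gateway can notify the operator. The topic layout,
# threshold, and event shape are assumptions made for this illustration.
import json
import boto3

iot_data = boto3.client("iot-data")

ANOMALY_THRESHOLD = 0.8   # illustrative threshold for raising a shop-floor alarm

def handler(event, context):
    """Triggered when a new inference result is written; raises an alarm if needed."""
    score = float(event["anomaly_score"])
    if score < ANOMALY_THRESHOLD:
        return {"alarm": False}

    iot_data.publish(
        topic=f"dpp/{event['plant']}/bodyshop/alarms",   # assumed topic structure
        qos=1,
        payload=json.dumps(
            {
                "spot_id": event["spot_id"],
                "station": event["station"],
                "score": score,
                "message": "Welding spot anomaly detected, please check the station",
            }
        ),
    )
    return {"alarm": True}
```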
It is not possible to coordinate this in a multi-tenant environment, because each plant has a different lifecycle, different models, and different maintenance windows. Therefore, it is practically impossible to coordinate with a single instance. Additionally, each plant has its own responsibility. Each plant has its own instance, but thanks to infrastructure as code, everyone can install the use case in a couple of minutes. Now we are at the end of this use case and will start with the second use case, which is computer vision. Thank you, Fabrizio.
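Before moving to the second use case, here is a rough idea of what "install the use case in a couple of minutes" via infrastructure as code can look like. The talk does not say which IaC tool is used, so the sketch assumes AWS CDK in Python with invented resource names: one stack definition, instantiated once per plant.

```python
# Sketch: one CDK stack per plant, so a new plant can deploy the same use case
# with its own parameters. AWS CDK (Python) is assumed as the IaC tool; the talk
# only says "infrastructure as code", and all resource names are illustrative.
import aws_cdk as cdk
from aws_cdk import aws_kinesis as kinesis, aws_s3 as s3
from constructs import Construct

class WeldingAnalyticsStack(cdk.Stack):
    """Per-plant instance of the resistance spot welding analytics use case."""

    def __init__(self, scope: Construct, construct_id: str, *, plant_id: str, **kwargs):
        super().__init__(scope, construct_id, **kwargs)

        # Landing bucket for curve data forwarded from the plant's edge gateway.
        s3.Bucket(self, "CurveLanding",
                  bucket_name=f"dpp-{plant_id}-welding-curves")   # assumed naming scheme

        # Ingestion stream sized independently for each plant.
        kinesis.Stream(self, "Telemetry",
                       stream_name=f"dpp-{plant_id}-welding-telemetry",
                       shard_count=4)

app = cdk.App()
for plant in ["neckarsulm", "ingolstadt"]:        # plants listed purely for illustration
    WeldingAnalyticsStack(app, f"WeldingAnalytics-{plant}", plant_id=plant)
app.synth()
```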
Weld Splatter Detection: Computer Vision at the Edge for Real-Time Action
The next use case is also about resistance spot welding, but it is a little bit different. It is a computer vision use case. Before, we had structured data; now it is computer vision. It is AI-based and it drives an action on the shop floor, and what that means I will explain to you now. This is the shop floor. We have a lot of robots working on the car body. You see the explosions; these can be weld splatters. This is not fireworks, this is not New Year's Eve. No, this is the shop floor. This is reality. It is not a perfect world. The metal sheets do not fit together perfectly every time. We get such explosions, and the result is a fault, a small fault on the surface of the car body. This is how it looks. Sometimes it can be a problem when it looks like this, and we need to remove it.
On the left side and on the right side, you see a station, a work station, and we have specific areas where we cannot have weld splatters; we need to remove them. This is because they can damage the wiring in the car body. There is a risk of hurting our employees when they connect or clip those wires. It is dirty, corrosion can also become a problem on the car body, and it costs us money, so we need to remove them. This is how it looks today: our employees have this station with a grinding process in it, and they grind the surface of the car body. It is a significant effort for us.
Removing those weld splatters is unplanned work, and the results are not reproducible by every employee; it is not always the same employee who works there and knows how to remove them. As mentioned before, we have a cycle time of 60 seconds, so the car body moves into that station, turns around, and is presented to the employee. He has only 20 seconds to inspect the whole body, and in those specific areas we need to remove the weld splatters. There are around 500 welding points, and he needs to inspect those 500 welding points and remove the splatters in 20 seconds, then push the button and the car goes to the next station. That was really a bottleneck for us.
From an ergonomics point of view, the splatters have to be removed on top of the body and underneath it, so it is not good for our employees, and it is a very, very dirty job. So we decided to use AI. It is a little bit different from the last use case because we need the information from the AI model in the PLC, and not in 5 or 10 minutes; we have 20 seconds to run that model, feed the result straight into the PLC, and then guide our employee. It is not pick-by-light; it is grind-by-light now.
The model delivers its result to the PLC and the light goes on, so the employee knows where to grind. We don't need to look at 500 points anymore. We have specific areas, and now the employee has to look at about 100 or 200 points, depending on where the weld splatters are. Saving that time is a big goal for us.
This is how the station looks. It is the station before the rework area. The car body moves in, the flash fires, we take the pictures, and the robot starts to work. We have 20 seconds to get the result to the next station.
This is what such a picture looks like. Can you find the splatter? It's really hard for a human to detect the weld splatter. There it is. Think about this: you have 20 seconds to detect such small pieces on the surface of a big car. It's an A6, a large car, so the employees run from one side to the other and check all the points.
The architecture is a little bit different, but the development of the AI model is still in the cloud. We develop in the cloud and train the model in the cloud. Then we have model management at the edge, so we deploy the model to the edge. The inferencing happens on the shop floor on a dedicated industrial PC with a GPU. In this specific use case, we have 8 cameras with 20 megapixels each, so we have huge pictures of around 50 megabytes, and the model needs to deliver its result within 20 seconds. It works very well with that industrial PC on the shop floor, and the direct connection to the PLC is also very easy at that level on the shop floor.
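To illustrate the shape of such an edge inference step (an editorial sketch, not Audi's code), the snippet below assumes an ONNX model served with ONNX Runtime on the GPU-equipped industrial PC, with an MQTT message standing in for the hand-off towards the PLC. The model file, threshold, topic, and broker address are all invented.

```python
# Sketch of the edge inference step on the industrial PC: run a computer vision
# model over the camera images and report splatter findings within the cycle
# time budget. ONNX Runtime is assumed as the inference engine and an MQTT
# message stands in for the path towards the PLC; both are illustrative choices.
import json
import time

import numpy as np
import onnxruntime as ort
import paho.mqtt.publish as publish

session = ort.InferenceSession(
    "weld_splatter_detector.onnx",                                # hypothetical model file
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],  # GPU on the edge PC
)
input_name = session.get_inputs()[0].name

def inspect_body(images: list[np.ndarray]) -> None:
    """Run detection over all camera images and publish the result for the PLC side."""
    started = time.monotonic()
    detections = []
    for camera_index, image in enumerate(images):
        batch = image[np.newaxis, ...].astype(np.float32)    # preprocessed NCHW tensor
        scores = session.run(None, {input_name: batch})[0]
        if float(scores.max()) > 0.5:                         # illustrative threshold
            detections.append({"camera": camera_index, "score": float(scores.max())})

    publish.single(
        "bodyshop/splatter/result",                           # assumed topic name
        json.dumps({"detections": detections,
                    "elapsed_s": round(time.monotonic() - started, 2)}),
        hostname="edge-broker.local",                         # assumed local broker
        qos=1,
    )
```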
The next step for us is to bring in robots, not to guide the employees, but to guide the robots to the specific points. This is what we are building up right now; this is from the simulation in Ecosur. When all components are there, maybe around Christmas time, we will implement that robot, and at the beginning of next year we will automatically grind the surface of our car bodies.
The architecture is a little bit different, and Fabrizio will explain what the difference is. The critical difference between the two use cases is that real time is required here, with a target of staying under 1 second for the development of fully automated control, where the robot controls the position and the cleaning: not only spotting with the light, but really controlling the arm that goes in to clean up. This will also drive several other new concepts on how to position the tool and how to control the robot during the cleaning.
Again, we are in the factory. In this case we are talking about cameras; we enrich the southbound part with new devices, and each device will run the inference in this case. Everything is controlled again with MQTT. You start to see that we are evolving the architecture. The previous architecture was designed to be composable; now we are evolving it and attaching new pieces, and MQTT is the backbone for all the information and the control part. Through MQTT we collect data and control the actions. Some actions are driven directly by the edge. In this case, the PC sends a command to the edge runtime; the edge runtime knows that it has to take a picture, and then it can start to analyze the picture. All this information is moved to the edge gateway, which collects all the actions that have happened, but also all the images that have been analyzed.
The images are analyzed and collected because we need them to train the model. The model itself stays at the edge level, and we will see the cloud side shortly. What has changed is that this came more than one year after the first architecture. The edge gateway remains in the northbound, but now we want to be elastic in the plant as well as in the cloud. We saw in the previous architecture that we used mainly serverless services in the cloud to get elasticity and to be able to scale based on demand, and that we separated the components so we could scale only the ones that need scaling. Now we need a similar capability in the plant, and this is why the edge gateway has been moved from an industrial computer to a Kubernetes cluster. The Kubernetes cluster runs containers for each component and for each data transfer handled by the edge gateway.
MQTT, as I already mentioned, is the main backbone for exchanging information and for control. Again, we have a control plane based on MQTT going to AWS IoT Core, and we upload all the images to S3 buckets. These S3 buckets stay in the data hub, which means other use cases can use these images for other learning purposes as well. Nothing has changed on the data science side. The data science team uses the same tools, but now they have access to images and can build models for anomaly detection, or more precisely computer vision models that check whether there is a defect or not. The system is the same, and the deployment from the data scientists is the same.
What has changed is in the application account. Most of the infrastructure that we built previously has been reused, and in some cases extended, but it remains a perfectly composable architecture. The difference is that the application account is a different account from the one we saw before; each use case has a dedicated account, again to reduce the blast radius. In this case, to deploy the model to the local inference endpoint, the cloud triggers operations to deploy on-premises using packaging. The model is packaged for the dedicated edge gateway, and MQTT is used to trigger all the actions needed for the deployment and to verify that the deployment has been successful on-premises. This is the new part of the application, and all of this infrastructure has been implemented through infrastructure as code because it has to be reused by multiple plants and has to be easy to replicate and scale across plants. This is the functionality and the difference between the two use cases: using the same components, maintaining the same principles, but evolving the infrastructure from near real time to real time.
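To make that deployment path a bit more tangible, here is a simplified stand-in; in the real platform this orchestration runs through the edge gateway components, so treat this purely as a sketch. The edge side could listen on a deployment topic, fetch the packaged model from S3, and acknowledge success. Topic names, the bucket reference, the local path, and the acknowledgement payload are assumptions.

```python
# Sketch of the edge side of a model rollout: listen on an MQTT deployment
# topic, fetch the packaged model artifact from S3, and acknowledge success.
# Topic names, bucket references, and paths are assumptions for illustration.
import json

import boto3
import paho.mqtt.publish as publish
import paho.mqtt.subscribe as subscribe

s3 = boto3.client("s3")
BROKER = "edge-broker.local"                  # assumed local broker address

def on_deployment(client, userdata, message):
    """Handle a deployment command published from the application account."""
    command = json.loads(message.payload)
    # Download the packaged model artifact referenced by the command.
    s3.download_file(command["bucket"], command["model_key"], "/models/current.onnx")

    # Report back so the cloud side can verify the rollout succeeded.
    publish.single(
        "dpp/deployments/ack",                # assumed acknowledgement topic
        json.dumps({"deployment_id": command["deployment_id"], "status": "SUCCEEDED"}),
        hostname=BROKER,
        qos=1,
    )

# Blocks and invokes on_deployment for every message on the deployment topic.
subscribe.callback(on_deployment, "dpp/deployments/splatter-model", hostname=BROKER)
```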
Lessons Learned and the Future of AWS-Volkswagen Partnership
Now, a little bit of a conclusion and some learnings, because it was a long journey and we learned a lot on both sides. We learned so much in these use cases, and the foundation of every AI use case is data quality. At the beginning of the resistance spot welding analytics, we had a lot of challenges. You need to explain to the data scientists 400 data points from each welding spot. That is hard work, and they need to understand what the process is and why there are faults and why not. In the weld splatter use case, with pictures, you have seen the pictures, and you have several experts on the shop floor who see different things in them. When you train a model with six different interpretations, the model gets confused, and that is not good. Then you have an accuracy of, I don't know, 30 percent, and this is not what we want. We want a model with high reliability and high accuracy. That is why data quality is the foundation of every AI use case. We also had simple and silly problems, like the same object having different names in different plants or carrying different measurement units, because most activities have been managed by humans, sometimes on paper with manual labels.
This results in labels that differ when reconciling across all systems, which was also a challenge.
Start small and go fast. We started with one welding gun, and if you connect one welding gun or 1,000, it takes nearly the same time. Start small, focus on one, and do it well. Once you've done this one time really well, scaling to 10,000 is not a problem. That's why we designed for scaling. This principle has been used in general and in a special way for this use case.
First, we have a principle that says always copy the data once to avoid overloading the devices. This might seem silly, but it's not just about cost. When you're in the factory, most of the time the device is already overloaded, making it difficult to extract data. If you want to extract data multiple times for multiple use cases, it becomes impossible. Everyone on the shop floor who is in charge will block you. Therefore, the system has to be scalable and optimized in resource consumption. This is why our first rule is copy once.
The second principle is that the system has to be composable. This means you have to be able to plug in, remove, or refactor components without impacting other components. This leads to the third element, which is a mindset: continuous refactoring is required, and DevOps implies continuous refactoring. However, this goes against the traditional manufacturing mindset, where you design something and run it for 10 years without touching it.
The idea of changing something that is running in production is not common in manufacturing. Rebuilding something that has already been done is sometimes perceived as a wrong design, but it's not. When you start small, at a certain point you need to rebuild or refactor some components. We started with the transfer of telemetry, but we knew it would eventually need to be replaced at scale because we needed to move billions of data points. However, for one welding gun, and to experiment and validate the use case, it was fine. Sometimes it's more important to have speed than the perfect solution, because the solution will evolve. You cannot answer questions you don't know yet or solve problems you don't know yet.
This mindset came from IT but is not common in production. At the same time, IT does not have a strong concept of production constraints. You can bring major changes into production only two times per year because the plant has to be stopped. No one can risk millions of dollars in line stoppages. This is a compromise and a mindset that we had to learn and that others will probably learn as well.
Bringing business and IT together was a real challenge. Our people on the shop floor don't speak the same language as cloud developers, and bringing these two worlds together required effort. You need to talk about the same things, and most of the time they mean the same things, but they say it in a different language. For operations on the shop floor, every 60 seconds is critical to build a car. That's the KPI. At the end of the day, we need to build 1,000 cars a day. When we lose one minute, it's only 999, and we need to report why it's only 999 and not 1,000. 1,000 is our goal every day. This is a real challenge, but I think we're doing it well. We developed two use cases.
Now I'll hand over to Verena to give you more insights about the DPP. Thank you both for those interesting insights into these two use cases. I hope you recognized some of the elements we discussed at the beginning, like automation and leveraging data to make use of it.
As you might have seen in the news, we managed to expand our partnership with Volkswagen. AWS and Volkswagen will continue on the same path for the next five years with the DPP, and we will certainly advance and enhance the platform, as you see here with the different layers. The DPP consists of different layers, and in the coming years we will focus on the integration layer to enable seamless exchange of data and communication. We will also focus on the experience layer, building smart and intelligent use cases like the ones you've seen today, and we want to have even more of them.
We will certainly also focus on data management, really trying to extract all the data from the shop floors and factories, and connect more plants to the cloud. We have had some successes in the past already. We have 50 plus plants connected to AWS, and we certainly want to have more, like 120 plus plants, so we have quite a number ahead of us. We established a connectivity backbone, which you might have seen in the architecture diagram that Fabrizio presented, leveraging MQTT broker and so forth. This is what enables us to quickly connect to the shop floor and extract data from it.
We built central data management with SageMaker Unified Studio. We have 450 use cases running across the different plants, and because of these running use cases, we have achieved significant cost savings to date. There is not much more to say from my side than to end with the slogan of Audi, Vorsprung durch Technik. I hope you can hear it, because I'm not able to hear it, but it means advancing through technology. I hope you gained some inspiration from our session today. Really try to be ahead of your competitors, leverage the data you get, and really try to build smart solutions and use cases.
If you have any questions, we still have some time. We won't answer them here on stage, but feel free to reach out to us afterwards. Thank you for listening, and please make sure you complete the session survey within the mobile app. Thank you very much.
; This article is entirely auto-generated using Amazon Bedrock.