DEV Community

Kazuya

AWS re:Invent 2025 - Powering Autonomous AI with Trusted Data (ISV318)

🦄 Making great presentations more accessible.
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.

Overview

📖 AWS re:Invent 2025 - Powering Autonomous AI with Trusted Data (ISV318)

In this video, Ansh Kanwar, Chief Product Officer at Reltio, discusses how their cloud-based master data management platform creates trusted data foundations for agentic AI. He explains Reltio's data unification approach, which combines records from multiple sources into golden records, and demonstrates real-world applications at CarMax and Warner Brothers. Kanwar introduces Agent Flow, their new agentic AI product built on Amazon Bedrock and Bedrock AgentCore, showcasing two demos: Agent Flow Resolver for automated data quality management, and a product recommendation agent that leverages 76 data points across 23 attributes to deliver personalized suggestions in real-time call center scenarios.


This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.

Main Part

Thumbnail 0

Reltio's Data Unification Platform: From Master Data Management to Trusted Enterprise Knowledge Graphs

Thank you for being here. I am Kristen Hughes, and I lead a team of solutions architects at AWS in the ISV space. The best part of re:Invent for me is being able to hear about innovation from our customers. Today we have the honor of hearing from Ansh Kanwar from Reltio. He's the Chief Product Officer. As you can see, Reltio is revolutionizing enterprise master data management and is the first cloud MDM platform to seamlessly connect Agentic AI with a trusted data foundation.

As the Chief Product Officer, Ansh is a thought leader in agentic AI and is spearheading Reltio's transformation. His strategic vision has enabled Reltio to unify, cleanse, and enrich data across enterprise systems from legacy infrastructure to modern applications to data lakes. This creates the foundation for an effective AI implementation. Let's hear from Ansh on the route to transformation, their purpose-built agents, and how they are delivering real business impact with real-time, unified, trusted data.

Thumbnail 30

Thank you, Kristen. Thank you all for being here. It's a small group, so we can have a more intimate conversation. What is Reltio? What does Reltio do? One way to understand Reltio is through the master data management space. Reltio started as the premier cloud-based master data management provider and brought the idea of operational master data management to the forefront. What we're doing at this point in the market is really thinking about a higher-level concept called data unification.

The core problem data unification addresses is this: we bring together records from different sources and ask how to select the data attributes with the highest fidelity. The output of combining all these source records is a higher-fidelity record in which you can place much more trust than in any single input. Over the years, this record, the idea of the golden record, has evolved significantly. Now we talk about trusted profiles: knowing your customer, your supplier, or a product. Essentially, for any noun your business cares about, how can you build a rich profile of that entity and keep it continuously updated as an evergreen data product that can be consumed in real time?
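The survivorship idea behind golden records can be sketched in a few lines. This is a toy illustration that assumes a single per-source trust score; the function, field names, and scores are hypothetical, not Reltio's actual API or matching rules, which are richer and configurable per attribute:

```python
# Minimal survivorship sketch: build a golden record by letting, for each
# attribute, the value from the highest-trust source win. Field names and
# per-source trust scores are hypothetical, not Reltio's actual model.

def build_golden_record(source_records):
    """source_records: list of {"source": str, "trust": float, "attributes": dict}."""
    golden, provenance = {}, {}
    # Process sources from lowest to highest trust, so that values from
    # higher-trust sources overwrite values from lower-trust ones.
    for record in sorted(source_records, key=lambda r: r["trust"]):
        for attr, value in record["attributes"].items():
            if value:  # skip empty attributes from low-fidelity sources
                golden[attr] = value
                provenance[attr] = record["source"]
    return golden, provenance

sources = [
    {"source": "ERP", "trust": 0.6,
     "attributes": {"name": "Acme Corp", "phone": "", "city": "Austin"}},
    {"source": "CRM", "trust": 0.9,
     "attributes": {"name": "Acme Corporation", "phone": "555-0100", "city": ""}},
]
golden, provenance = build_golden_record(sources)
# name and phone survive from the higher-trust CRM record; city from ERP
```

Keeping the per-attribute provenance alongside the golden record is what makes the output auditable, which matters later when agents have to justify their conclusions.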

Thumbnail 90

Your energy is focused on pushing in all of this data from across your enterprise, your first-party data. Reltio combines it into a graph that is then available for consumption on the other side. Both the input and output are available via APIs and events, so it is as near real-time as possible. The upshot of all of this is that you now have a trusted, governed layer of data: core data to your business. What is this data about? It's about customers, suppliers, products, and your financials. Anything that is ultimately meaningful to build business operations on top of.

Thumbnail 120

The picture here summarizes that, with the knowledge graph of Reltio as the context graph in the middle of all of these enterprise operations. We've been fortunate to serve some of the largest companies in the world. We started off in the regulated space with life sciences and healthcare companies. Our footprint in financial services and insurance is now fairly large. With AI, we are now seeing that this graph, once trapped inside the idea of master data, has become the underlying core data that powers more and more of agentic AI as it spreads across the organization.

Thumbnail 190

Thumbnail 290

That's really the context for these fifteen to twenty minutes: the bright "Agents" box on the left-hand side, and what it means if you already have a data foundation of the nature I just described. The best way to understand any product, in my opinion, is to understand what our customers are able to do with it. I'll tell you two quick customer stories. The first is CarMax. CarMax is the largest retailer of used cars.

They typically see a car come in and out of one of their showrooms three times over the car's life cycle. They also typically see buyers multiple times, buying or selling a vehicle through them. Their highest-level promise to their customers is that a customer can walk in, close a transaction, whether that's buying a car, selling a car, or both, and walk out within 15 minutes.

To enable all of this, they've organized the data underneath to support this business process. They think in terms of three 360s, if you will: three 360-degree views. They have a 360-degree view of every vehicle they can get their hands on, a very large data set covering essentially every vehicle in the United States. Similarly, they have a 360-degree view of buyers, of adults in the US, and a 360-degree view of their employees.

For them, a transaction is really bringing together a 360-degree view of a vehicle with a 360-degree view of a customer with a 360-degree view of their salesperson. Not only are they closing the transaction with the customer within 15 minutes, they're actually also closing the commission with the salesperson within that 15-minute period. To be able to deliver on this business promise, they use Reltio organized in the way that I described to be able to support all of this in real time across their US footprint.

Now for a very different kind of example, Warner Brothers uses Reltio to manage their IP assets. If you think about Daffy Duck or one of the cartoon characters, the character itself has quite a bit of commercial footprint. What are the merchandising deals that are associated with this particular character? What are all the movies or ads that this particular character appeared in? All of that is very critical to be able to then account for revenue against that particular IP.

Thumbnail 460

You can think of that as a product that is managed by Warner Brothers within Reltio as a graph—an interconnected graph of their characters and the movies and then their merchandising deals. Just two examples, hopefully, that illustrate the point that if data is organized in a certain way, it can power business processes in a continuous manner.

Agent Flow: Transforming Manual Data Management with Purpose-Built Agentic AI

Not everything is perfect in the world of data management. Despite being the most cutting-edge platform in this space, there are still a lot of operations that have to be performed manually. Some of these have to do with things deep in the data, like entity resolution: deciding whether two things are the same or not. Our machine learning algorithms are able to resolve 85 to 90 percent of matches. However, some percentage still gets handed to a human to resolve.
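The split between automated matching and human review can be pictured as a confidence-threshold rule. The thresholds and routing below are illustrative only, not Reltio's actual matching model:

```python
# Illustrative routing of match candidates by confidence score:
# high-confidence pairs auto-merge, clear non-matches auto-reject,
# and the ambiguous middle band is queued for a human data steward.
AUTO_MERGE_THRESHOLD = 0.92   # hypothetical cutoffs, for illustration only
AUTO_REJECT_THRESHOLD = 0.40

def route_match(pair_id, score):
    """Decide what happens to a candidate pair of records with a given
    match-confidence score in [0, 1]."""
    if score >= AUTO_MERGE_THRESHOLD:
        return (pair_id, "auto_merge")
    if score < AUTO_REJECT_THRESHOLD:
        return (pair_id, "auto_reject")
    return (pair_id, "steward_review")  # the residual manual work described above

candidates = [("p1", 0.97), ("p2", 0.55), ("p3", 0.12)]
decisions = [route_match(pid, score) for pid, score in candidates]
# p1 merges automatically, p3 is rejected, and p2 goes to a human
```

The agentic opportunity discussed below is precisely that middle band: shrinking the "steward_review" queue without sacrificing auditability.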

For large companies, there is a data steward role: people who spend their time making sure this data is at the highest quality it can be. That's a lot of manual effort. Data quality is almost a religious debate. You can spend a lot of money as a large enterprise getting data quality to a perfect level, but what is good enough? That really depends on the application. Getting to the right level of quality for a particular use case still takes a lot of manual effort.

Thumbnail 570

Of course, data models present another challenge. All of you deal with diverse data landscapes. Different products think of a customer differently: one with certain attributes in a SQL database, another very differently in an application. The complexity of data models makes it really hard to get an enterprise-wide uniform view of what's happening across the data landscape.

We set out to solve this remaining problem in data management. How can we make data management for the largest companies in the world an order of magnitude, even two orders of magnitude, better than it is today? That always means cheaper, faster, and provably better. That is the context in which we worked very closely with AWS to launch this new capability.

Thumbnail 610

The new product in market is the bright yellow arrow here: Agent Flow. But before I describe that, let me describe the base of the pyramid. This is the set of products Reltio already has in market: multi-domain master data management, which covers all of your core entity data, and Intelligent 360, which allows you to bring in a lot more information and connect it with that entity data to form this complete evergreen graph. Both of these products can then be augmented by agentic means, and that's what the Agent Flow layer is, which I'll talk about and demo in a second.

Thumbnail 670

I also want to highlight the bottom portion, which is our data cloud, which runs on Amazon. This allows us to stand on the shoulders of giants and be able to really deliver world-class security, world-class compliance, and global data distribution at ultra-low latencies using a lot of the primitives that Amazon provides. Okay, so enough talking. Let's go through a couple of demos. I'll try to do this looking down here, and the video will play in the background while I give you the talk track.

Thumbnail 690

Thumbnail 700

Thumbnail 710

So what is Agent Flow Resolver? This is the Agent Flow interface. We can publish multiple agents here, and the idea is a conversational interface that lets us ask questions not in SQL but in plain English, like: we're experiencing service delays at our customers in our top 100 segment; we think it's because of data quality problems, specifically duplicates; please resolve.

Thumbnail 730

Thumbnail 740

The system has now gone through its chain-of-thought reasoning and proposed an approach. It said: I'm going to do some searches, I'm going to do something for each of the organizations I found, and so on. Here are some options for you to choose from. If you use Cursor or any of the other AI development tools, you know this is a very familiar experience. We're bringing that to data management, and in this case it has identified a couple of different anomalies in the data. Importantly, it has also pulled data from the internet. One of the feature requests we've heard from our customers forever is: look, there's so much data on the internet; why can't we leverage it for better data management?

Thumbnail 760

Thumbnail 770

In this case, we're trying to resolve data around the Solar Turbines company, and you can see different columns where each of the attributes has been organized. Ultimately, it comes down to a set of automated recommendations. We could do all of this in the background through an API, and we do, but this illustrates the data steward role I mentioned and how it is accelerated. For somebody doing this research manually, organizing all of this data is easily a 35- to 40-minute task, and here it is at their fingertips.

Thumbnail 780

Thumbnail 790

A very big part of being able to deliver this is delivering trust in what's happening. Every step of the way, the tools that were used are listed, along with why the conclusions are being drawn. What you don't see here, but is present, are guardrails. Specifically, if I ask it to make me an omelet, it's going to say: great idea, but I'm not the right agent for that. Being able to stay on task is very important, and we're able to deliver that in the data management context.
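Conceptually, a guardrail is just a gate in front of the agent that refuses off-topic requests. The keyword check below is a deliberately naive stand-in; a real deployment would use a model-based classifier (such as guardrails provided by the platform) rather than string matching:

```python
# Toy guardrail: refuse requests outside the data-management domain before
# they ever reach the agent. Keyword matching is only a sketch; production
# systems would use a model-based topic classifier instead.
ON_TOPIC_KEYWORDS = {"duplicate", "merge", "record", "data quality",
                     "entity", "attribute", "profile"}

def guardrail(user_request):
    """Return 'route_to_agent' for in-domain requests, otherwise refuse."""
    text = user_request.lower()
    if any(keyword in text for keyword in ON_TOPIC_KEYWORDS):
        return "route_to_agent"
    # e.g. "Great idea, but I'm not the right agent for that."
    return "polite_refusal"

decision = guardrail("Make me an omelet")
# an off-topic request never reaches the data-management agent
```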

Thumbnail 840

Thumbnail 880

Let me switch over and show you a different agent. In this case, the setup is that a company has its product data in Reltio, its customer data in Reltio, and a call is coming into a call center. We're agnostic as to whether this call is being answered by a human or by an AI agent. The task is to recommend products to be placed within that phone call, and to determine what those products would be. So it really comes down to personalization for an audience of one, which has been the holy grail for many years, and the power of data organized a certain way makes it seem almost trivial.

Thumbnail 890

Thumbnail 900

In this case, we've selected a different agent: the product recommendation agent. What we're saying is there's a certain customer calling us, and we would like to produce three compelling product recommendations. Do your magic. The agent goes through and searches for that particular customer. Once it finds the customer, it's looking for multiple data sources that we have from our first-party data about the customer to really try to understand how they've been interacting with us.

Thumbnail 910

Thumbnail 930

Thumbnail 940

In this case, they've been interacting with us by purchasing certain things from us. They are enthusiastic about tech products. They've asked questions in our community as a retailer, so this is definitely an engaged customer. Because of these signals, we know that they might be interested in buying drones from us. Here's a list of the three products that would be interesting to them. Most importantly, even if you produce a list of three, you need to be able to defend why you recommended those products, because that's the only way to improve these algorithms going forward. The agent does that by citing which data points it used to make the decision.
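Returning the "why" with each recommendation amounts to carrying the matched signals through the scoring step. Here is a hypothetical sketch; the product names, signal tags, and scoring are invented for illustration and are not how the demoed agent actually works:

```python
# Hypothetical sketch: score products against customer interest signals and
# return each recommendation together with the data points that justify it.
def recommend(customer_signals, catalog, top_n=3):
    scored = []
    for product in catalog:
        matched = [s for s in customer_signals if s in product["tags"]]
        if matched:
            scored.append({"product": product["name"],
                           "score": len(matched),
                           "because": matched})  # the audit trail for the "why"
    # Rank by number of matched signals, breaking ties alphabetically.
    scored.sort(key=lambda rec: (-rec["score"], rec["product"]))
    return scored[:top_n]

signals = ["tech_enthusiast", "drone_purchase", "community_member"]
catalog = [
    {"name": "FPV Racing Drone", "tags": ["drone_purchase", "tech_enthusiast"]},
    {"name": "Garden Hose", "tags": ["outdoor"]},
    {"name": "Drone Battery Pack", "tags": ["drone_purchase"]},
]
recommendations = recommend(signals, catalog)
# the drone ranks first (two matching signals), each result carries its "because" list
```

Keeping the `because` list attached to every result is what lets a call-center agent, human or AI, defend the recommendation in the moment.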

Thumbnail 960

Thumbnail 980

Thumbnail 1000

Now we go further. We say, okay, this individual may be interested in buying something for their family. So what do we know about their family? Where do we have consent? And using the correct information legitimately, how can we come up with more recommendations that are relevant to the family? In this case, we find that Sarah, who we searched on, has a spouse and a child. The child, a nineteen-year-old daughter, may be interested in an FPV racing drone. So it seems like a family hobby, and we've come up with a recommendation. Based on the conversation, the agent can now position one or the other, with full ability to justify why it is positioning this new product.

Thumbnail 1020

Thumbnail 1030

Thumbnail 1040

As we go through the rest of this, we'll flip over. There's also a little bit of communication strategy: it's worth pointing out that the agent is being guided on how to communicate all of the data it just produced. Then, just to illustrate the point, we ask how many data points it used to generate this output. In this case, it used seventy-six different data points across twenty-three different attributes and multiple relationships. I think the relationships bit is super important and often overlooked if you have overly simplistic data models, because it is about the connectivity of these different entity types and how dense that connectivity is, and we can infer a lot from those relationships.
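That "N data points across M attributes" accounting can be pictured as a traversal of the profile graph that tallies every attribute read and every relationship walked. The toy version below uses invented profiles and counts; it is not the demo's data or Reltio's graph model:

```python
# Toy profile graph: entities with attributes, plus typed relationships.
# Traversing from a customer through relationships and tallying every
# attribute read gives the kind of "N data points, M attributes" accounting
# shown in the demo. Profiles and numbers here are invented for illustration.
profiles = {
    "sarah": {"attributes": {"age": 41, "segment": "tech", "city": "Denver"},
              "relationships": [("spouse_of", "mark"), ("parent_of", "emma")]},
    "mark":  {"attributes": {"age": 43, "hobby": "drones"}, "relationships": []},
    "emma":  {"attributes": {"age": 19, "hobby": "fpv_racing"}, "relationships": []},
}

def tally(start):
    """Walk the graph from one entity; count attributes read and edges walked."""
    attributes_read, relationships_walked = 0, 0
    queue, seen = [start], set()
    while queue:
        entity = queue.pop()
        if entity in seen:
            continue
        seen.add(entity)
        attributes_read += len(profiles[entity]["attributes"])
        for _relation, target in profiles[entity]["relationships"]:
            relationships_walked += 1
            queue.append(target)
    return attributes_read, relationships_walked

data_points, relationships = tally("sarah")
# 7 attribute values read across 3 connected profiles, via 2 relationships
```

The point of the sketch is that the relationship edges, not just the attributes, are what let the agent reach the daughter's hobby from the original caller's profile.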

Thumbnail 1050

Building Enterprise-Scale AI Agents with Amazon Bedrock and AgentCore

With that, I'm going to switch over to how we built this, which might be of interest to some of you. We leveraged everything Amazon offers at this point with Bedrock, AgentCore, and the agent-building SDK, and putting that together to deliver this functionality means we have a lot of confidence in the output these agents produce. We have a lot of trust because of our data layer, and we have a lot of confidence because AgentCore helps us with guardrails, with memory, and with multi-tenancy. As an independent software vendor, guaranteeing these nonfunctional capabilities would mean a lot of heavy lifting behind the scenes, and with AgentCore we're able to leverage much of that without having to build it ourselves.

Thumbnail 1070

Building these agents and running them at scale is important because our customers are some of the largest companies in the world. Once they start using any of this, it gets used at scale, so we need a framework behind us that can handle that: we're not going to run out of GPU cycles, and we're not going to run into token limits and those sorts of things. So far, we've had a fantastic experience building and delivering on AgentCore. Some of the things coming out, especially this morning's announcement around policies in AgentCore, make this even stronger, and I'm very excited about the direction the team is taking.

Thumbnail 1180

With that, I've walked you through some of the videos here, but I'd invite you to come see the demos live and interact with the Reltio team. We're out there at booth 1227. Please come talk to us if you want to learn more about anything I just touched on. Thank you very much.


This article is entirely auto-generated using Amazon Bedrock.
