🦄 Making great presentations more accessible.
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of the original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.
Overview
📖 AWS re:Invent 2025 - Concept to campaign: Marketing agents on Amazon Bedrock AgentCore (AIM395)
In this video, experts from AWS and Epsilon demonstrate how Amazon Bedrock AgentCore services solve multi-channel marketing automation challenges through agentic AI. The session covers AgentCore's modular services including Runtime, Identity, Gateway, Memory, and Observability, explaining how they enable flexible agent deployment with any framework (LangGraph, CrewAI, Strands) and any LLM. Key architectural patterns are explored: agent-to-agent communication via Boto3 or A2A protocols, security through IAM or OAuth authentication, multi-tenancy approaches, and cross-account deployment strategies. Epsilon showcases their real-world implementation with 20+ agents for audience segmentation, campaign creation, and performance optimization, achieving 30% reduction in campaign setup time and 20% increase in personalization capacity. The demo illustrates how their AI agents automate campaign briefs, content generation, branding consistency, and dynamic personalization using Epsilon's 200 million consumer identity database. Technical deep-dives cover MCP server integration, AgentCore Gateway for semantic tool discovery, session isolation, and OpenTelemetry-compatible observability across CloudWatch, Datadog, and Langfuse.
This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.
Main Part
Multi-Channel Marketing Challenges and How Agentic AI Provides Solutions
Before we get started, I want to touch base a little bit about what multi-channel marketing is, what challenges are associated with it, and how agentic AI can help solve those challenges. Multi-channel marketing is a customer-centric marketing strategy where you provide seamless integrated experiences to your customers at the touchpoints where they interact with your brand. At the same time, you need to ensure that there is consistent messaging across all the channels through which they interact with your brand.
Let's look at some of the challenges that come with automating multi-channel marketing and how agentic AI can help. I will cover what agentic AI means, how we evolved into agentic AI, and all those details in the upcoming slides. First, let's look at the business problem. The first challenge with multi-channel marketing automation is associated with the workflows that you need to orchestrate. There are different customer journeys, multiple channels, and different touchpoints for your customers.
How do you ensure that you have this overall workflow deployed in a way that is flexible? With traditional workflows, they are too rigid and stick to a particular script. With agentic AI-enabled workflows, you have the flexibility to have multiple agents coordinating with each other in real time. You can have specialized agents built for specific marketing channels. You can also have dynamic workflows where if your customer responses and behavior are different than expected, the agentic AI-based workflow can adapt based on your customer behavior.
You can also have cross-channel context preservation and deliver consistent messaging across all the channels you are trying to reach them on, whether it be email, SMS, online web personalization, social media, and so on. Then there is the challenge of real-time personalization. Whatever messaging you are delivering to your customers needs to be personalized to those customers. When you have thousands of customers across multiple channels at that scale, agentic AI can definitely help. Traditional systems usually lack the ability to deal with that kind of complexity.
With agentic AI-based systems, you can have intelligent content generation. Depending on the particular customer segment and customer behavior, you can dynamically generate what specific content makes sense for that particular customer and that particular channel. You can also do real-time decision making. For example, if the customer is interacting more through email or more on social media platforms, depending on the kind of interactions you are seeing and observing for your customers, you can switch between different channels as necessary.
You can also have brand guardrails baked into the AI agents themselves. That way you have brand consistency and compliance already built into the AI agents and you do not have to worry about it. The next challenge is with data silos. In any typical enterprise, you see data spread across your CRM systems, your e-commerce systems, your data analytics systems, and so forth. Looking at all this data in a holistic way and acting upon the customer data that you have across all these systems is quite complicated.
Where agentic AI can help is build that unified data point of view. You can have APIs and MCP servers that are exposing the data from these individual systems, and agents can actually get all the data from these different systems associated with a specific customer and identify their behavior and the patterns associated with them. You can also have things like data cleansing. There may be certain places where you need to cleanse data coming from your CRM system or some other system, or you may have to do some data mapping between different systems. All those things you can automatically achieve using agentic AI.
Accelerating Campaign Agility and Measuring ROI Across Touchpoints
Then there is the speed to market and campaign agility aspect. If you look at traditional marketing campaigns, they take a lot of time to set up, whether it be the content generation or thinking about how to customize it.
Customizing campaigns and identifying target or segmented audiences takes weeks of effort. By the time you finish, you may have already missed the market opportunity. With agentic AI, you can have natural language-based campaign definition. You can say you're trying to target an audience that is under 25 years of age, uses Instagram or other social media platforms frequently, and lives in California or a specific city. You just provide the campaign definition in natural language, and the agents will understand it, identify the right audience, and identify the right content that needs to be generated.
You can also have instant campaign optimization. When your campaign performance data streams in and you want these AI agents to look at what is happening, you can optimize in real time. You can have that real-time campaign optimization in place. One of the biggest challenges is attribution and ROI measurement across all touchpoints. You have emails going out, SMS messages going out, and customers interacting with your brand at a store. There are many different touchpoints where your customers interact with your brand. With agentic AI, you can have intelligent attribution modeling that accounts for all these different touchpoints and all the complex customer journeys across them.
You can also have real-time ROI optimization so you can see which particular channel is performing better versus other channels. Let's say your marketing campaign is targeted to send 10,000 emails and target the same campaign on a social media platform, but you're seeing more responsiveness on the social media platform versus email. You can quickly switch between channels so it's much more optimized and you're spending the right money on the right things. You can also perform predictive analytics. Even before you execute your marketing campaigns, you can start measuring or estimating what the campaign performance would be if you execute across these channels and target these particular audiences.
The Evolution from Rule-Based Systems to Agentic AI
I did promise that I would cover what agentic AI means in the first place and how we evolved into agentic AI today. On the right-hand side, you'll see low agency systems. These are traditional systems that are completely rule-based and require a lot of human oversight. On the left-hand side, you see high agency systems which don't need much human oversight. Traditionally, companies have usually started by using rule-based RPA, or robotic process automation, which are rigid in nature and task-oriented. You specify exactly the task you want to accomplish, provide the set of rules associated with that task, and they can only focus on that task while requiring a lot of human oversight.
Then came generative AI assistants where they know what they need to do for specific tasks and humans are not significantly involved, but they still need some level of human oversight. You can say this is the task you're supposed to do, and they will do it for you. Examples include customer service chat assistants powered by AI behind the scenes or intelligent document processing where you can read through a document, and it goes through the document, understands what it contains, and completes the task. It understands what tasks need to be done and will do it by itself, but you still have to define the workflow and tell that generative AI assistant exactly what it is supposed to do.
Then came goal-driven agents, which are the next level, and this is where we started seeing real business transformation. These agents understand what the business objective is. You're not defining individual tasks anymore. You're basically saying this is my business objective, now please go do this.
These agents are able to come up with intelligent tasks that need to be executed, the workflow, the order, and so on, and execute those and accomplish the objectives that you are trying to achieve. Then there is fully autonomous agentic systems. These are kind of rare today. We don't have a lot of them, but this is where the highest level of agency and autonomy that we are working towards exists. These systems can make strategic decisions on your behalf. They are not just business objective oriented. Now they are able to make decisions on your behalf, interact with other agents, interact with other tools, and make, let's say, a payment on your behalf or those kinds of things.
So what does an AI agent mean, right? You may have heard about or been using LLMs, large language models or small language models, which lack the ability to act on problems by themselves. You say, "Can you write me a poem?" and they'll write you a poem, but if you say, "This is my objective, go do something," they will not be able to do it. You have to specify what needs to be done. AI agents, on the other hand, are autonomous or semi-autonomous software systems that can independently plan and act on your behalf with minimal human oversight. They can also interact with the environment they are in, whether it be APIs or the data sources you have, and take independent decisions on their own.
Amazon Bedrock AgentCore: Taking AI Agents from Prototype to Production
It is relatively easy to build an agent. For those who have already built AI agents, you already know how easy it is. But when it comes to taking them to production, think of all the things that need to happen. Think of the thousands of invocations that will potentially hit that particular agent. Think of the thousands of interactions that this agent will have with other agents and other tools as necessary. All these complexities are in place, and if you have to manage your own infrastructure for this, those agents are not going into production.
This is where most customers are building agents and doing prototypes but not taking them to production. That's where Amazon Bedrock AgentCore services come into the picture. AgentCore services let you build, deploy, scale, secure, and observe your AI agents in production. You can use any agentic framework. If you're familiar with LangGraph, CrewAI, Strands, any of these agentic frameworks, you can use them. You can also use any LLM, right? You can use Bedrock-based models, whether they be Claude models and so on. You can use OpenAI models. You can use Gemini models. You can use anything that you want. That's the flexibility that AgentCore services provide you.
AgentCore is a set of modular services which you can use all of them or some of them as necessary for your particular situation. You don't have to use every service out there. You just pick and choose the ones that make sense for your particular agentic AI application. The first one of the services is AgentCore Runtime. Runtime allows you to run your AI agents and MCP servers. It is a completely managed service. You're not managing any of the infrastructure. You're only purely focused on what your AI agent needs to do and the code associated with it, and you deploy it in there. It is secure in nature. It has its own session isolation and other capabilities. There are data protection capabilities and all those things that come baked into the runtime itself. You can have real-time interactive agents or long-running agents, agents that may be running up to eight hours in duration.
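To make the Runtime side concrete, here is a hedged sketch of what invoking a deployed agent can look like from a client. The ARN, region, payload shape, and the `bedrock-agentcore` boto3 client call are written from my reading of the current APIs and should be verified against the AgentCore documentation before use; the agent name is hypothetical.

```python
import json

# Hypothetical ARN: substitute your own deployed agent's value.
AGENT_RUNTIME_ARN = (
    "arn:aws:bedrock-agentcore:us-east-1:123456789012:"
    "runtime/my-marketing-agent"
)

def build_invoke_params(prompt: str, session_id: str) -> dict:
    """Build request parameters for invoking an AgentCore Runtime agent.

    Each end-user conversation should get its own runtimeSessionId,
    since the runtime isolates state per session.
    """
    return {
        "agentRuntimeArn": AGENT_RUNTIME_ARN,
        "runtimeSessionId": session_id,
        "payload": json.dumps({"prompt": prompt}).encode("utf-8"),
    }

def invoke(prompt: str, session_id: str) -> bytes:
    """Perform the actual call (needs AWS credentials; not executed here)."""
    import boto3  # deferred so the sketch loads without boto3 installed
    client = boto3.client("bedrock-agentcore", region_name="us-east-1")
    response = client.invoke_agent_runtime(**build_invoke_params(prompt, session_id))
    return response["response"].read()
```

Because the service is fully managed, this client-side view is essentially all the infrastructure code you write; the agent itself is just your framework code deployed behind that ARN.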
AgentCore Identity, Gateway, Memory, and Observability Services
Then there is the AgentCore Identity service which allows your AI agents to securely interact with other services, whether it be your enterprise APIs or other services like Salesforce, Slack, or Google. All these external services as well can be accessed securely using AgentCore Identity. It basically reduces the authentication and authorization lifecycle. Your AI agent, let's say it needs to go talk to Salesforce first. The user has to authenticate.
Then there needs to be some level of token exchange happening between Salesforce and the user. Once you have the token, that token needs to be passed to your AI agent to continue doing the work. All of this involves storing tokens in a secure vault, making sure that only the agent that is supposed to access a token is able to access it, and reusing tokens as necessary. These tokens don't last forever; sometimes you have longer-duration tokens, one hour, two hours, whatever duration of token you generate. Regardless of that duration, you don't have to write even a single piece of code. All you do is reference a couple of decorator functions from AgentCore Identity, and you will be able to eliminate all the boilerplate code that is necessary for authentication and authorization.
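To make the pattern concrete, here is a toy, framework-free sketch of the lifecycle AgentCore Identity automates: fetch a token once, cache it in a vault until it expires, and inject it into the decorated function. This is illustrative only; the real service does this with secure storage and its own decorator API, so check the SDK documentation for the actual names and signatures.

```python
import time
from functools import wraps

class TokenVault:
    """Toy stand-in for a secure token vault with expiry-aware caching."""

    def __init__(self, fetch_token, ttl_seconds=3600):
        self._fetch = fetch_token      # e.g. a call to an OAuth token endpoint
        self._ttl = ttl_seconds
        self._token = None
        self._expires_at = 0.0
        self.fetch_count = 0           # exposed for illustration

    def get(self):
        now = time.monotonic()
        if self._token is None or now >= self._expires_at:
            self._token = self._fetch()  # only hit the provider when needed
            self._expires_at = now + self._ttl
            self.fetch_count += 1
        return self._token

def requires_token(vault):
    """Inject a valid token as the first argument of the wrapped function."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            return fn(vault.get(), *args, **kwargs)
        return wrapper
    return decorator

# Usage: the agent code never handles the auth lifecycle directly.
vault = TokenVault(fetch_token=lambda: "fake-oauth-token")

@requires_token(vault)
def call_salesforce(token, query):
    # A real tool would make an authenticated HTTP call here.
    return f"GET /query?q={query} (Authorization: Bearer {token})"
```

The point of the sketch is the shape, not the implementation: your business function only declares that it needs a token, and caching, expiry, and injection happen outside your code.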
This reduces the amount of development time required to take your agent to production. You're not writing any of that code. You're just using built-in capabilities, and the beauty of it is that it already integrates with identity providers such as Okta, Cognito, Entra ID, and so on. Then there is AgentCore Gateway, which provides a secure mechanism to integrate with your existing APIs and Lambda functions or even external services. It acts as a gateway in front of all your APIs and can convert those APIs into MCP-compatible tools. You don't have to build your own MCP servers in front of your APIs, Lambda functions, and other AWS services. You can just leverage Bedrock AgentCore Gateway to handle that MCP exposure, so your agents can talk to those tools over the MCP protocol.
It also allows for a scenario where you have thousands of APIs across your enterprise and you're trying to convert them into MCP servers and tools. When the tool collection keeps growing and your agent is trying to find what tools are available, the context window for your agent keeps getting bloated, and you don't want that. Where Gateway shines is that it provides a semantic search capability: the agent can say, "I'm trying to send an email. Can you give me a tool associated with it?" and Gateway will pick the specific tools that make sense for sending an email and expose only those tools to the agent, so that your agent's context window does not get bloated.
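As a toy illustration of that idea (not Gateway's actual implementation, which uses real semantic search rather than keyword overlap), you can think of it as scoring each tool's description against the agent's request and returning only the top matches. The tool names and descriptions below are invented for the example.

```python
def select_tools(request: str, tools: dict, top_k: int = 2) -> list:
    """Return names of the tools whose descriptions best match the request.

    Toy keyword-overlap scoring; a real gateway would use embeddings.
    """
    request_words = set(request.lower().split())

    def score(description: str) -> int:
        return len(request_words & set(description.lower().split()))

    ranked = sorted(tools, key=lambda name: score(tools[name]), reverse=True)
    return [name for name in ranked[:top_k] if score(tools[name]) > 0]

# A hypothetical enterprise tool catalog exposed through the gateway:
catalog = {
    "send_email":     "send an email message to a customer",
    "create_segment": "create an audience segment from crm data",
    "post_social":    "post content to a social media channel",
    "send_sms":       "send an sms text message to a customer",
}
```

Only the winning subset is handed to the agent, which is exactly what keeps the context window from growing with the size of the catalog.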
Then there is AgentCore Memory. If you want memory associated with your AI agents, and you want to store and retrieve information across multiple sessions and invocations of your AI agents, you can leverage Memory. It has both short-term and long-term memory capabilities. Depending on what information you want to store, what type of AI agent you're building, and whether it requires some level of data retention for chat history or anything else your agent is doing, AgentCore Memory will help you with that.
And then finally, there is AgentCore Observability. This is a fully managed service which will help you get a single-pane-of-glass view into what exactly your AI agent is doing: what model it is calling, how many tokens are used, what the inputs and outputs are, and what tools or MCP servers it is calling. So you get that auditability and observability right in the service itself. Some of the frameworks like LangChain and others already have those things baked in, and all the logs, metrics, and traces are OpenTelemetry compatible. So if you want to push those logs into CloudWatch, Datadog, Coralogix, Langfuse, or any of those different tools, you can do so.
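Because the telemetry is OpenTelemetry-compatible, switching backends is typically just exporter configuration. A hedged sketch using the standard OTel environment variables follows; the endpoint, header, and attribute values are hypothetical placeholders, and each backend (Datadog, Langfuse, a collector in front of CloudWatch) documents its own required endpoint and auth headers.

```shell
# Standard OpenTelemetry OTLP exporter settings; any OTLP-compatible
# backend (an OTel collector, Datadog, Langfuse, ...) can receive these.
export OTEL_SERVICE_NAME="marketing-orchestrator-agent"
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
export OTEL_EXPORTER_OTLP_ENDPOINT="https://collector.example.com:4318"
export OTEL_RESOURCE_ATTRIBUTES="deployment.environment=prod,tenant=acme"
# Backend-specific auth usually rides on headers:
export OTEL_EXPORTER_OTLP_HEADERS="authorization=Bearer <token>"
```

The agent code itself does not change; only the destination of the logs, metrics, and traces does.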
Agent-to-Agent Communication Patterns in Multi-Agent Systems
With that said, I would like to hand it over to Jiten. He'll cover the architecture patterns. Thank you, Sandeep, for covering the marketing automation challenges and the what and whys of AgentCore. Hi, I'm Jiten Dedhia. I'm a Senior Solutions Architect specializing in AI/ML and Gen AI. I cover the advertising and marketing industry across the US. Just with a show of hands, I'd like to know how many of you are from the advertising and marketing industry.
Just a few of you. The good thing is that whatever I'm going to cover, while it may have a focus on advertising and marketing, is going to be applicable across the board. So any of your use cases—financial services, life sciences, everything—is going to be applicable in the same way.
What I'm going to do is deep dive into how to use AgentCore and how to use a multi-agent system. I'll cover what the patterns are and what important things to consider, and we'll deep dive into those aspects. In particular, we'll start with agent-to-agent communication. We'll discuss what the options are, the pros and cons, then we'll go into the security aspect of it, and finally, multi-tenancy and observability.
Let's take a distributed communications system, right? A multi-agent system. In a multi-agent system, communication is like the nervous system of the brain. Here we have multiple agents shown. These are marketing-related agents, but as I said, they could be any agents. As Sandeep referenced, these agents could be built using any framework. One thing we're advocating is that you don't necessarily need to restrict your teams to one kind of framework. Some teams could use LangGraph, others could use Strands, right? And it's a perfectly cohesive system. They can talk to each other very smoothly within AgentCore Runtime. So that's not a restriction. If you want to standardize as an enterprise on one thing, that's fine, but it doesn't have to be that way, and that's something you need to realize.
Here you see a user who is going to typically interact with an application, maybe an existing application or maybe a new application. It could be chat or it could be a web app. What the application is going to do is communicate with one of the agents or multiple of these agents. In many scenarios, you will see that there is an orchestrator agent. So the application hands off to the orchestrator, and then the orchestrator is going to decide who it needs to talk to and when it should talk to each one of these agents.
Now, let's see how it can talk to these agents. We have a few options here. You can see there's Boto3, and then there is A2A. Let's say the application first calls using a Boto3 API and invokes the orchestrator agent. The orchestrator agent now has a choice. It can use Boto3 APIs to invoke the other agent, or it can use A2A communication. When would you use which one? As an enterprise, you may want to standardize, but you don't necessarily have to. AWS Boto3 gives you built-in retry logic and error handling, some of those native things that you're used to. It will make your life simpler. When would you use A2A then? A2A is an open protocol. So if you want to standardize on open standards, or if you have agents living outside of AWS that you need to call, those are the cases where you would typically go for an A2A kind of approach.
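For orientation, an A2A call is JSON-RPC 2.0 over HTTP. Below is a hedged sketch of building a `message/send` request body; the field names follow my reading of the A2A specification and the endpoint in the comment is hypothetical, so verify both against the protocol version you actually use.

```python
import json
import uuid

def build_a2a_request(text: str) -> dict:
    """Build a JSON-RPC 2.0 'message/send' request for a peer A2A agent."""
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),          # request correlation id
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "parts": [{"kind": "text", "text": text}],
                "messageId": str(uuid.uuid4()),
            }
        },
    }

# The body would be POSTed to the peer agent's A2A endpoint, e.g.:
#   requests.post("https://agents.example.com/a2a",
#                 json=build_a2a_request("optimize the email campaign"))
```

Contrast this with the Boto3 path, where the SDK handles serialization, signing, and retries for you; with A2A you own the HTTP layer but gain a framework-neutral, cross-platform protocol.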
Now, imagine a world, and this probably is true for many of you who have been building agents for a while before AgentCore came out and before deploying it to AgentCore. What you would typically do is maybe you have an ECS container or EKS or Kubernetes where you would have written your LangGraph agent and deployed it there. Now those agents, legacy agents as I call them, are still there. You start building new agents and start deploying them in AgentCore. They need to talk to the old agent as well, and that's where probably you would use APIs to make the call to those agents. You'll write a tool within one of your agents or the orchestrator agent, and that tool may call out as an API to these other agents which are sitting in ECS or any other container.
Security Architecture: IAM Authentication, OAuth, and AgentCore Gateway
Then we take the next case. Here is where it gets a little bit more complex, but this is a real-life enterprise situation. There are multiple teams developing agents. There are reasons where you will need to use multiple AWS accounts. So maybe some of your agents are deployed in this account, others in that one, and another in a third account. When are those use cases? One, as I said, is that there are multiple teams developing them, so for ownership purposes and for cost allocation purposes, you need that. But then there are use cases where, let's say, you have a database which is sitting in account A.
Your agent needs to interact with that database. In that case, you may want your agent to be sitting in that account and talking to your database there. But then the other agents will need to talk to that agent, and that's where you will need to do communication between them. Another use case is shared services. Maybe there are some shared services type agents that are written as a common framework for your enterprise. Those agents could be living in a different account as well. So again, you'll need to communicate across AWS accounts in those cases.
Either way, the same thing applies whether it's within the same account or cross account. You still can use the Boto3 APIs or you can still use the A2A communication. So you don't have to worry about that and feel free to deploy it wherever you need to based on your use case. The next is an MCP server. Typically agents will require tools. Tools are going to be written as an MCP server. Now, how do you communicate to an MCP server? Using the MCP protocol, right? So that is available out of the box as well with AgentCore.
Those were the patterns to communicate with each other. Now what about security? If you ask your security team, the first thing they'll say is: how do you control access to the agents? That's the thing, right? The authentication, the authorization, the fine-grained control, all of that is needed for multi-agent orchestration. So what are the options here? Basically, we have two choices: IAM authentication and OAuth. OAuth requires an OAuth provider which can do the OAuth token generation for you. IAM is the native AWS authentication that you can use. Both of those options are available, and again you can mix and match. You don't have to choose one over the other. You can decide that some agents use IAM authentication while others use OAuth.
For an MCP server, the standard MCP spec uses OAuth, but we have modified that within AgentCore so you can use IAM as well. So if you deploy an MCP server within AgentCore Runtime, you can use just IAM authentication, and then you can do native IAM authentication across the board if that's your preferred choice. With OAuth, you typically require an OAuth provider, whether Cognito, Okta, or any other. If you're not familiar with it, the process is basically this: your OAuth provider is set up for a certain agent and can generate a token. When you call the agent, you pass that token along with the request.
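Configuration-wise, attaching an OAuth authorizer to an agent at deploy time amounts to pointing the runtime at your provider's OIDC discovery URL and the client IDs whose tokens it should accept. The sketch below follows my reading of the AgentCore APIs; the field names, the Cognito URL, and the client ID are assumptions to verify against the current documentation.

```python
# Hypothetical values: substitute your own user pool and app client IDs.
authorizer_configuration = {
    "customJWTAuthorizer": {
        # OIDC discovery document of your OAuth provider
        # (Cognito, Okta, Entra ID, ...)
        "discoveryUrl": (
            "https://cognito-idp.us-east-1.amazonaws.com/"
            "us-east-1_EXAMPLE/.well-known/openid-configuration"
        ),
        # Only tokens minted for these client IDs are accepted.
        "allowedClients": ["example-app-client-id"],
    }
}

# This dict would be passed when creating or updating the agent runtime,
# e.g. (not executed here):
#   control_client.create_agent_runtime(...,
#       authorizerConfiguration=authorizer_configuration)
```

With this in place, the runtime performs the token interception and validation described above before any of your agent code runs.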
Now here's the beauty of it. What AgentCore does is intercept every single request. It's going to inspect it. It's going to see whether the token is present or not. Once the token is present, it's going to go to the provider, validate the token, and only then does the first line of your code get executed. So that's where the security comes in. That's relieving you of the security job. We are guaranteeing you that we will be intercepting the request, we'll be taking care of the authentication, and only valid requests are coming through. So that is a very important thing to note here.
There are two cases of OAuth tokens. One is a user-generated OAuth token, and the other is a machine-to-machine token. So there are two types of tokens that you could potentially use. When do you use which one? As we saw, the user calls the app and the app calls the agent, so there you would use the user-generated token; the user's identity gets passed along with it. But between two agents, or between an agent and an MCP server, those are machine-to-machine interactions, and you may want to just use a machine-to-machine token. AgentCore Identity supports both of those options, and you can use either of them.
Another thing sometimes you need is the user's identity needs to get propagated further to the MCP server, right? You have an MCP server which is a tool. Let's say it's accessing a database. You want to limit the access based on who the user was. And in that case, you will pass on the user's identity to the MCP server and make that decision.
If that is not the case, then using the machine-to-machine token makes your life simpler.
Sometimes you do need fine-grained control even further down. For example, you may have an agent that is doing multiple tasks, and then you need to have fine-grained control, which is something you may need to implement using code within your agentic code. So what we do is take the identity that is passed on and allow you with easy methods to extract that identity token from the request and from the context. You can get the user's identity, inspect it, and see whether this user is allowed to do operation A and B. If so, keep doing both operations. If the user is only allowed to do operation A, then just allow them to do operation A, and that is something you build in with your code.
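A minimal sketch of that in-code check, assuming the propagated identity is a JWT whose payload carries a `permissions` claim (the claim name is an assumption; your provider may use scopes or groups instead). Note that it decodes without signature verification, which is only acceptable here because, in this scenario, the runtime has already validated the token before your code runs.

```python
import base64
import json

def decode_jwt_claims(token: str) -> dict:
    """Decode a JWT payload WITHOUT verifying the signature.

    Fine here only because the runtime already validated the token
    upstream; never skip verification at a trust boundary.
    """
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def authorize(token: str, operation: str) -> bool:
    """Allow the operation only if the user's claims grant it."""
    claims = decode_jwt_claims(token)
    return operation in claims.get("permissions", [])

def fake_jwt(claims: dict) -> str:
    """Build an unsigned token purely for local illustration."""
    enc = lambda d: base64.urlsafe_b64encode(
        json.dumps(d).encode()).decode().rstrip("=")
    return f"{enc({'alg': 'none'})}.{enc(claims)}."
```

In real agent code you would pull the token out of the request context the runtime hands you, then branch on `authorize(...)` exactly as described above: do operations A and B if both are allowed, only A otherwise.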
Next, let's decide between IAM or OAuth and when to use which. The decision is simple. If you want to stay in the AWS ecosystem and you're using AWS across the board, you can keep using IAM and make your life simpler. Also, when you're starting up a new project, it's much simpler to just use IAM unless you have already used OAuth in other places. Then it becomes simpler setup-wise and you can get up and running very quickly.
On the other hand, OAuth is the way to go if you're already familiar with OAuth, if your organization has an OAuth provider already set up, or if you want to call across clouds. If you want to have cross-cloud compliance or if you want to use open source standards only, then OAuth is the way to go. There's a little bit more setup at the beginning, but then it's standard and you can use it across the board as well.
When we talk about multi-account, if you're using IAM, then you may need to use cross-account roles. That's the only difference, but other than that, it still stays the same. Finally, let's talk about the gateway. If you remember, Sandeep spoke about AgentCore Gateway. This is where if you have enterprise APIs, you don't need to start writing MCP servers and wrapping around them. You can just use AgentCore Gateway to do it. But as soon as you introduce this, how do we do security around it? How do we do authentication and authorization for AgentCore Gateway?
AgentCore Gateway supports OAuth as your mechanism. It exposes everything as an MCP server by itself, so it works as follows. Say an agent needs to call an API that lives in your enterprise. You introduce the gateway and supply an M2M token to it, and the gateway will verify and validate that token. Only if it's valid will it pass the request on and call your APIs. It's not going to call your APIs otherwise. So that's the built-in security that you get.
Then let's say your token is valid. Now how do you call your APIs? Your enterprise may have secure API keys that you already use to authenticate with your APIs, or it could be nothing; maybe they are unprotected APIs. All of those options are supported by AgentCore Gateway. The gateway can be configured to say this API is protected by an API key while this other API is protected by OAuth, and you can mix and match. The key is that you don't need to change your API code at all. Just configure AgentCore Gateway to use your existing API and expose it as an MCP server. That makes your life simpler across the board.
Multi-Tenancy, Observability, and Enterprise Implementation Patterns
So we saw how to communicate between multiple agents and then we saw how to secure them. Now let's get into multi-tenancy. This is a real requirement in many use cases. The multi-tenancy patterns here are very similar to those of a SaaS system. If you originally had a SaaS web-based application that was multi-tenant, the same patterns apply; it's just how you look at it that is different. So option one is that you can have multiple instances of the same agent deployed, one for each tenant. There, you get complete isolation and complete boundary separation.
The second option is to have an endpoint deployed. When you deploy an agent to AgentCore, you can have multiple versions, and each version can have endpoints associated with them. You can say that endpoint A is for client A, endpoint B is for client B, and you can associate it that way. You have some degree of separation and you call it separately, but behind the scenes there is the same agent running. Or you can just have a single endpoint and do everything in the code within your agent, so you just have one instance of it running.
However, remember that even when there is one instance running, every time you invoke an agent, it's a completely different isolated session that AgentCore gives you. When user A calls an agent for tenant A and user B calls an agent for tenant B, they're using the same agent, but we completely isolate that session for you. So there is no need to go with option one or two. You can just have a single endpoint and handle it. If you do decide to do option one or two, that is also fair game. It's just additional management that you will have to go through. So that's the decision point: whether you want more separation with a little bit of additional management, or you can handle it within your code.
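One lightweight way to lean on that per-invocation isolation in a single-endpoint, multi-tenant setup is to derive the runtime session ID deterministically from the tenant and conversation, so two tenants can never collide on a session. This is a sketch; the exact format and length constraints on session IDs are assumptions to check against the Runtime documentation.

```python
import uuid

# Stable namespace for this application (arbitrary, but keep it fixed).
SESSION_NAMESPACE = uuid.UUID("6e8bc430-9c3a-11d9-9669-0800200c9a66")

def tenant_session_id(tenant_id: str, conversation_id: str) -> str:
    """Derive a deterministic, tenant-scoped session ID.

    Same (tenant, conversation) -> same session; different tenants can
    never share a session, so runtime-level isolation keeps them apart.
    """
    sid = uuid.uuid5(SESSION_NAMESPACE, f"{tenant_id}/{conversation_id}")
    return f"tenant-{sid.hex}"  # hex is 32 chars; prefix keeps it readable
```

The same single agent endpoint then serves every tenant, with the session ID carrying the tenant boundary instead of separate deployments.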
Another thing to remember about multi-tenancy is cost attribution. That's where the OTel logging we were discussing, OpenTelemetry, comes into play. You will need to log those events for auditing purposes and cost allocation. Make sure that if you have a multi-tenant system, you log enough around it using OTel, and then you can use that anywhere. Finally, we come to observability. Agentic observability is slightly different from what you need for any standard system. You need to trace every single component and what's going through it. Why did the agent make this decision? How did it make that decision? Is it calling this system, this agent, and then this agent and this tool? You need all of that traceability to understand, to debug, and finally to audit as well.
Out of the box, as we said, CloudWatch dashboards are provided. You can do session-level tracing. In one of the dashboards we provide out of the box, you can click into a session and then into every single component to see how long it took, what the request was, and what the response was. All of that detail is available. But if you already have a third-party observability system like Langfuse or LangSmith, feel free to use that. We provide the OTel logs; take them and feed them into your existing system. That's fair game as well. Both options are fully compatible with AgentCore.
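Pointing the OTel data at a third-party backend is typically just exporter configuration. The snippet below uses the standard OTel exporter environment variables from the OpenTelemetry specification; the endpoint URL, service name, and header value are placeholders, not real credentials or a specific vendor's address.

```shell
# Route the agent's OpenTelemetry data to an external OTLP collector
# (e.g. Langfuse or Datadog). Values below are placeholders.
export OTEL_SERVICE_NAME="campaign-agent"
export OTEL_EXPORTER_OTLP_ENDPOINT="https://collector.example.com"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Basic <base64-credentials>"
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
```

Because these variables are part of the OTel standard, the same traces that feed the CloudWatch dashboards can be redirected to an existing observability stack without code changes.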
Finally, we come to the future state: an agent observing all the other agents, which is where you eventually want to get with a final solution. This is the solution that Epsilon has implemented, and all the patterns we have discussed are reflected here. For security, we standardized on Okta: we use Okta for authentication and authorization when communicating between the agents. We're using Boto3 here, so every agent talks to the others through Boto3 APIs. We deploy some agents in our account so they can talk to native databases, such as Neptune or any of the other databases that are private in a particular VPC. The same applies to the APIs on the right-hand side: the Epsilon Cloud messaging APIs are our enterprise APIs, and they are accessed via the gateway.
We standardized on Okta, so MCP servers, the gateway, and agent-to-agent communication all use Okta for security, and we access the APIs through that. If you take away a few things from this deep dive, the first is: don't build a single monolithic agent. Split it into micro-agents, each doing a small amount of work. What I've often seen is that teams have this kind of architecture in a picture, but then build it as a single agent in LangGraph using a workflow or something like that. No, separate it out so each piece can scale on its own. The second thing is security. Day one, day zero, you need to have that built in. Trust but verify should be your principle for security.
The third principle is multi-tenancy. Build it from day one. If you need multi-tenancy, make sure it's built in right from the start so that you don't have to go back and refactor later. Now I'm going to hand over to Prashant, who's going to show you all the magic of AgentCore and how these agents are implemented.
Epsilon's Journey: Building Marketing Automation Agents with AgentCore
Thank you. I'll now go over how we used AgentCore architectural patterns to address marketing automation challenges. Hello everyone, I'm Prashant Athota, Senior Vice President of Software Engineering at Epsilon Data Management. Today, I'll cover who we are, what we do, and the opportunities we identified to build agents within our product ecosystem. I'll discuss the types of agents we built using AgentCore and Bedrock, show a small demo of those agents in action, which is probably the most interesting part of my presentation, and then cover the benefits and challenges we faced while building those agents, along with lessons learned.
About Epsilon: Epsilon Data Management, or Epsilon for short, is a subsidiary of Publicis Groupe, the world's leading advertising and marketing company. Epsilon is a data-driven marketing company providing data, technology, and services to help our clients understand their customers and engage them across various communication channels. At Epsilon, we serve many industries including quick-serve, finance, retail, automotive, CPG, travel, and advertising, among others.
What makes Epsilon unique in this industry is customer identity. We carry over 200 million privacy-protected IDs linking address, name, and at least a single transaction. Ninety-five percent of them are verified, and 100 percent are deterministic. The data we host includes a national consumer database of over 200 million consumers with over 1,000 self-reported attributes and over three trillion dollars in transaction spending. We also offer best-in-class solutions like digital media, retail media network, clean rooms, customer data platform, loyalty, and messaging, along with industry-leading support services.
At Epsilon, we believe in person-first marketing. Our one view uses our data and identity to provide a single comprehensive view of the universe of potential buyers for a marketer's products. With our one vision, we understand who to engage and when to engage, using real-time spend data. And with our one voice, we deliver an experience that is relevant and personalized for each individual at a point in time. These are the areas where we identified improvements and where we continue to use our agents.
Some of the improvement opportunities include effective personalized messaging, providing the targeted message at the right time for the right individual. By doing that, the results show higher engagement for that message. Another opportunity is dynamic customer journey orchestration to provide user experiences relevant to the user's transaction, which will in turn improve conversion rates.
Integrated systems for engagement allow us to bring the agents together so we can integrate all these products, which gives us more efficient resource usage and more efficient operation of those products. Rapid innovation and automation through building our AI platform helped us build agents much faster and improve time to market. All right, so let's talk about driving performance across the omnichannel approach.
Most brands would like to know their customers and create effective audience segments. To do that, we built agents. One is our 360-degree customer view: Audience AI allows marketers to understand their databases and complex relationships without writing complex SQL statements. They can literally write in plain English and get their segments built. Then we have defining campaigns across the channels. The types of agents we built include a campaign brief agent, a branding agent, a creation agent, and a campaign agent with AI-assisted on-demand and scheduled execution. All these agents come together to build those campaigns. Once a campaign is generated and executed, we have agents measuring performance and readjusting and optimizing the campaign strategy, such as campaign analytics agents consuming the insights.
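The plain-English-to-segment flow mentioned above follows the familiar text-to-SQL pattern. The sketch below is hypothetical: the table schema, prompt wording, and function name are invented for illustration and are not Epsilon's actual implementation, which the talk does not detail.

```python
# Invented example schema for an audience table.
SCHEMA = "consumers(consumer_id, state, age, last_purchase_date, total_spend)"

def build_text_to_sql_prompt(request: str) -> str:
    """Wrap a marketer's plain-English request into an LLM prompt that
    constrains the output to a single SELECT over the known schema."""
    return (
        "You are an audience-segmentation assistant.\n"
        f"Tables:\n{SCHEMA}\n"
        "Return exactly one SQL SELECT statement, with no commentary.\n"
        f"Request: {request}"
    )
```

The generated SQL would then be validated and executed against the consumer database, so marketers never write SQL themselves.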
Here is the timeline of our journey. When we started back in Q1 2024, we introduced Gen AI for audience creation, where the platform can build audiences or segments using plain-English text-to-SQL conversion. Then in Q3 we implemented subject-line generation and image tagging on Amazon Bedrock with some of those agents. At the beginning of this year, we built our first content creation (HTML) agent. Then in Q3, we introduced AI into our SDLC, which definitely improved our productivity, getting almost 20-plus agents built in a short time for content creation, monitoring, and so on across our platforms. At present, we have 7+ teams working alongside AWS building our agentic platform. And now we're going to go to a small demo of all these agents coming together.
Live Demo: AI Agents Creating Personalized Email Campaigns at Scale
Every minute, brands send tens of thousands of emails, and the vast majority land in blind inboxes. With Epsilon, that ends. Engage with customers on a deeper level across all devices. Let me show you how Epsilon agents build campaigns.
Every strong brand starts with a clear message. Inside Epsilon Messaging, that message begins with a campaign brief built on decades of Epsilon's identity intelligence and marketing precision. A campaign doesn't start with chaos, it starts with structure. Epsilon Messaging instantly organizes intent into a clean, dependable workflow.
Choosing email here isn't selecting a format, it's activating one of the most trusted channels in modern marketing. With a few details, the platform aligns data, context, and audience signals behind the scenes, shaping a foundation the brand can build on confidently.
The editor opens not as a blank canvas, but as a guided space where the brand's voice naturally finds clarity and direction. Templates remove the burden of design guessing. Branding is applied with precision, so every message aligns with the brand's identity, instantly recognizable and always consistent.
Once the template is in place, the brand stands on solid ground. The styling is locked, the look is unified, and the message can now focus purely on meaning. A generic campaign fades fast, but when Epsilon's AI agents shape the audience, timing and message, your campaign hits with precision and drives real action. When the message fits the moment, the channel becomes effortless with Epsilon messaging.
Edits that once required long back and forth cycles become frictionless. Phrasing, flow, and rhythm adjust with a level of precision that keeps the message sharp. Suggestions add another layer—subtle, thoughtful, grounded in brand tone and clarity—the small refinements that elevate communication. Personalization becomes identity driven with no long segmentation work, no manual filtering, and relevance at scale, supported by the industry's most trusted data.
Dynamic elements bring the message to life, effortlessly adding motion, interaction, and depth without additional creative complexity. Tone controls bring discipline—professional when needed, optimistic when desired—a brand's voice clearly expressed the way it was intended. Readability and visual refinements enhance comprehension. Clean spacing, balanced design, and structured hierarchy guide the reader through the message effortlessly. With one click, the message sharpens into its final form.
Campaigns miss when they lack relevance. Epsilon's AI agents build relevance at scale, so your message lands and your customer responds. A full review confirms the message is aligned, structured, and ready. Device previews show consistency across screens and environments. Timing decisions shape performance. Epsilon messaging aligns delivery with patterns proven through identity intelligence and decades of engagement data.
The message moves from creation to execution, built with precision using AI agents, strengthened by identity, delivered through Epsilon messaging—the platform the world's strongest brands trust to communicate with clarity, scale and purpose. Now I'm going to go over the type of benefits we observed by building all these agents across our product suite.
Benefits, Lessons Learned, and Future Directions for Autonomous Agents
By using AgentCore, we definitely see reduced agent prototyping time, from weeks down to days. For campaign setup time, we observed a 30% improvement, and as adoption goes up across campaign managers and analysts, I'm sure we'll see more. Personalization capacity increased by 20%. Campaign creation obviously saved many hours, but this is just the beginning. As we build more and more agents across other areas, we're definitely going to see a lot more improvements and savings.
Regarding lessons learned, we used to spend a lot of time manually creating workflows, but we switched our focus to automation and outcome-based journeys. The teams are changing their mindset to think more about the outcome than about what to do and how to do it. In any use case, instead of worrying about which models are out there or what's coming next, focus on the problems you're trying to solve and the outcomes you're looking for, and then pick the models best suited to your needs rather than chasing models one by one.
Rapid prototyping definitely helps. Platforms like AgentCore, with their plug-and-play abstraction layers, planners, and recommendations, will help you build, experiment with, and deploy agents at speed. Using something like Amazon Q integrated with your SDLC improved our productivity, especially in SDLC and AI pipelines.
Integrating them definitely helped us speed up our iterations and fail fast. So what's next? Autonomous everywhere. We would like to expand our use of agents from marketing to other areas and platforms, such as data and loyalty, and to build self-healing, self-monitoring agents. We also want to integrate the SDLC pipelines with operations so we can have autonomous agents maintaining platform stability, safety, security, and compliance.
And a unified intelligence layer: by using AgentCore Gateway to bring all the products together, the MCP servers and tools work in concert, and data is shared across products through the same agents. On commercialization and acceleration, we'd like to take these architectural patterns and build agents that our clients can use, or let clients build their own agents using our data and APIs. We'd also like to publish some of these agents in the marketplace so we can monetize them.
All right, here are some resources and learning materials that I think will be really helpful for you. These are things I refer to every day. There are great examples and sample code on GitHub, and there are free learning courses on AWS Skill Builder. Please take advantage of these and get as much hands-on practice as possible so you can build great AI agents and deploy them on Bedrock AgentCore services.
And if you love us, you can clap at the end of the session, but first remember that you have to fill the survey. Especially look at my face. I'm very innocent. I'm a nice person. Please do fill the session survey and give us good ratings. Thank you so much guys. Take care.
; This article is entirely auto-generated using Amazon Bedrock.