Kazuya

AWS re:Invent 2025 - Agentic AI: The Next Frontier of Cloud Intelligence (AIM342)

🦄 Making great presentations more accessible.
This project enhances multilingual accessibility and discoverability while preserving the original content. Detailed transcriptions and keyframes capture the nuances and technical insights that convey the full value of each session.

Note: A comprehensive list of re:Invent 2025 transcribed articles is available in this Spreadsheet!

Overview

📖 AWS re:Invent 2025 - Agentic AI: The Next Frontier of Cloud Intelligence (AIM342)

In this video, Atos executives Brian Ray, Justin Cook, and Mark Ross discuss agentic AI and productionizing AI solutions. They reference Gartner's hype cycle showing agents at peak hype while Gen AI enters the trough of disillusionment, noting MIT's finding that 95% of AI use cases don't reach production. The team explains Atos's focus on category 3 and 4 AI use cases requiring vertical/horizontal solutions and custom model training. They introduce the Polaris AI platform and demonstrate a practical agentic AI application: Atos Digital Advisor for cloud advisory. This solution uses Amazon Bedrock AgentCore, AWS Step Functions, Lambda, and MCP servers to automatically assess customer environments across multiple domains, enabling advisors to have more meaningful conversations focused on future goals rather than current state documentation.


This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.

Main Part

Thumbnail 0

The Current State of Agentic AI: Navigating the Hype Cycle and Adoption Challenges

Hello everyone, I am Brian Ray. I head up the data and AI business line for North America at Atos. Hello everyone, my name is Justin Cook. I'm the global CTO for the AWS Alliance at Atos. Hi, and I'm Mark Ross. I'm our chief architect for AWS in our engineering function. So we're going to demystify a bit about agentic AI and give you a little bit about the critical path forward for productionizing AI and give you a real honest landscape of what it looks like today.

Thumbnail 30

Thumbnail 40

First off, a little bit about where Atos is at. Atos has been around twenty-some years. My group came in from a lot of different acquisitions, seventeen or so. Philippe Salle, our chairman of the board and CEO, said that AI is going to be extremely important to consulting and that we need to make it a day-one business line. That represents two thousand employees, and we're on track to be ten thousand by 2028, within a firm of eighty thousand folks overall. It's a small portion of a big firm, but it's really the path forward for us and our customers.

Thumbnail 80

I'm an industry expert, and I also really rely a lot on what the industry is saying here. How many here have seen the Gartner hype curves before? I'm going to interpret this for you a little bit. It's really important to understand what they're trying to say here. Basically, agents, believe it or not, are at the tip of hype right now. I know you're shocked. Has anybody here heard of AI? Can everybody spell it? Good. It is a big topic and it's at the tip of the hype right now.

Model ops is coming down and they call this a trough of disillusionment. That's when people say the reality sets in. This is harder than we thought. We have data availability issues. We don't have the technology needed to do it. We need to go to the AWS conference and learn something. Many reasons it becomes the trough of disillusionment. You see model ops and Gen AI is already in that trough. The bubbles have burst to some degree with some of that.

Then if you see on the far side, that's the plateau of productivity. That's where we come back out of that trough and that's where we all want to get. We want this technology to be seamless, part of our daily business life, and we want it to make our lives better and easier, not challenging us every day. So that's where we want to get. What we're hearing is we need to make that data available. We need it to be governed. We need to have access to data in all sorts of places, SharePoint, other places like that. Protocols are coming up. You've heard MCP and other protocols making that more available.

Also, we need to be able to take some of those use cases and prioritize them based on ROI and risk and be able to do that better. MIT said ninety-five percent of those use cases people plan to do are not really getting into production. That makes a lot of sense if you take a lot of swings. Of course, we're all aiming to be that five percent to get your use cases in production.

Thumbnail 190

Another thing from Frances Karamouzis and Gartner, and this is very helpful because it lays out AI, especially Gen AI, in five buckets. On your far right are the market makers: Anthropic, Google, Mistral, the ones making the hundred-million-dollar investments. On your far left is category one: the very consumer-based, easy-entry uses. My mom uses ChatGPT for recipes. It doesn't make her cooking better, but she uses it. Category two is embedded technology, where AI gets embedded into the tools you already use.

Category three is where you start using AI to solve specific vertical or horizontal use cases for a specific industry or a department, and then category four is where you take your data and you train a model or fine tune it or you gear it specifically to a problem. So category three and four, it gets more expensive as you get into those spaces. It's really a hockey stick curve of cost. Speaking of Atos, one of our big focuses is those category three and category four use cases. It requires lots of partnerships to do it, a lot of technology to do it. But it really has value, and we're seeing that value every day now.

Thumbnail 280

The other thing that you're hearing is adoption problems. People are given these tools and there's a large fallout rate, and it depends on who you are. Everybody wants to move from left to right in the slide, okay? Everyday users want to be able to get advanced with AI. Your children will be better at prompt engineering than you, okay? So everybody wants to use this technology. Lots of reasons for that.

Governance is critical—I always say that AI without governance is like playing tennis without a net. It doesn't really work and it's not that much fun. But everyone wants to move in that direction. If you're advanced or agentic, you want to move toward more advancement. Why do people fall out of that matrix? What happens when they fall out of it? Sometimes you'll see them use shadow AI, which is when you're using it anyway, or you'll see someone abandon it completely, saying we're not going to adopt this AI technology in my workflow because it just isn't working well enough for me. What's the solution for that? Partners and technology help, a need for consulting, and a need for community. Eventually there will be a community around Agentic AI, believe it or not. There will even be a currency someday around Agentic AI. Agents will have their own currency speaking to each other someday very soon. Think about that for a second.

Thumbnail 380

Atos's Strategic Approach: Building Outcome-Based Solutions and Go-to-Market Frameworks for Agentic AI

I'd like to talk for a minute about how we can help you get to the place where Brian wants you to go. At Atos, we're very focused on outcome-based solutions. Instead of just listing a bunch of services, we want you to understand where you're going before you get there. We do that by scaling out competencies we have—19 of them—and a lot of partner programs. What we really focus on is how we build a relationship with AWS to get the client there faster and more efficiently. The ultimate goal of our partnership with our clients and with AWS is to grow the relationships on all facets to make agentic evolution come faster.

Thumbnail 400

One thing I really want to discuss is how fast AI is evolving. We know from this conference that we are seeing frontier agents and different kinds of model distillation. One thing that's interesting is the different nuances we're seeing. It's going everywhere from browsing and scraping the web to executing multi-step tasks with strong governance. We see it across a bunch of different standards for security, but knowing when is the right time to implement is key, and that is where Atos can help you. We know how to roadmap and carve out solutions to do a consistent strategy around agentic AI with all the new feature sets.

The goal is really around not just creating better agents but around better frameworks. We want to create a deployment and orchestration platform with tools and marketplaces that can build, scale, and manage agents in real time. This is not just about agility, latency, and scalability, but about your individual domain needs for the data that's attached to the AI. What does this all mean? We're going to start moving toward personifying AI. These agents can be self-adaptable, they can collaborate, they can sync, and it's all about using real-time enterprise data—your data. We're leveraging your specific data to feed this agent workflow.

As we start shifting more and more to agentic AI, we're going to accelerate more outcomes with minimal human intervention. That's really the key. I want you to ask yourselves the question: Is there a path from agentic AI to artificial general intelligence? What assumptions are baked into the goals that we give AI in your own individual organizations, and how do you see that growing? At Atos, we created a platform called Polaris AI. The ultimate goal is to give an abstraction layer to how you build in the marketplace. As we grow agentic AI more and more, we're able to help you build out those professional services, and the objective is that we are always client-focused with what you need to do with your data and your AI solutions.

Let's talk about go-to-market around Agentic AI. You're going to hear a lot of services at this conference today and a lot about what exactly is going on and new possibilities, but how do you get to the point where you're actually using this with your customers or with your use cases specifically? The first one is: how do we bring the agents to our system? We're seeing a lot more industry-flavored super agents. As 2026 evolves and we go to market, there's a lot of AI-targeted super agents. We're seeing domain-specific applications—maybe it's finance, maybe it's tax, maybe it's accounting, whatever that is. Industry standards are baked horizontally and vertically across all organizations now.

Autonomy and multi-modal integration are coming out more and more. We're seeing different modalities, text, vision, whatever it is, and agents acting with more flexibility, handling novel tasks that we no longer need to handle ourselves. But last but not least, there's a lot of safety alignment and oversight. How do we put those guardrails and frameworks around drift detection, ensuring alignment, and items like run checks? It's tricky, but you need to make sure that you're prepared. That is the next point: how do we proactively prepare to move toward this go-to-market?

Thumbnail 510

At Atos, one thing that we really specify is industry-aligned solutions where we're amplifying the actual value of the AWS services. You can look at the service catalog and put them together, but how does this work to create an outcome? That's what we really design. We design the roadmap for you, and this is always about business outcomes. You can talk about AI innovation all day, but if you do not put that through a business lens, you're not able to prioritize those measurable outcomes and connect those to things like KPIs.

The third key area is differentiation. Understanding differentiating factors through an AI-first strategy, which you're going to see more and more over the next year, is critical. This is because we're not just bolting on anymore. We're building cloud native now, and that changes how we see AI. The key is to bring your vertical depth and your AI accelerators with that native implementation for faster time to value.

Last but not least, there's a really important one: proprietary IP and vertical AI solutions. How do we scale our AI solutions vertically and horizontally, perhaps across different geographies, different industries, and different regions? This is going to be really critical in the next year. You want to grow across all methodologies at once. How do we look through a lens of sovereign cloud expertise as well? That's something that we specialize in at Atos that we can help you with, as well as the data-first architecture.

This one is very interesting. We want to make sure that we're leveraging the data we have in our ecosystem for our models and for agentic AI. It's there. We need to leverage it and we need to make sure that we're using it correctly. Last but not least, how does this all fit into agentic AI orchestration? How does this build and how can we help you get there is the next step.

Atos Digital Advisor: A Real-World Implementation of Agentic AI for Cloud Advisory Services

At Atos, we're a large GSI with a cloud and modern infrastructure practice. One of the things we provide to customers is cloud advisory. Traditionally, cloud advisory has been done with one of our cloud advisors sitting down with customers and talking to them about their current environment, their goals, and helping to map out how they get there. It can be quite a time-consuming process involving discussions with multiple stakeholders on the customer side.

We've chosen cloud advisory as one of the use cases that we've taken forward with Agentic AI to modernize it with our Atos Digital Advisor. Using Agentic AI, we connect to the customer's environment and learn a lot of information about their environment. We connect to the web and learn a lot about the customer before the advisor sits down with the key stakeholders from the customer for the first time. This frees up our advisors initially to be able to spend more meaningful time with customers, having more meaningful conversations. It's great for the customer because they haven't got to sit down and explain their environment to us, and that time can be better spent talking about where they want to go and how we can get there with them.

Thumbnail 690

We look across a number of domains and assess the customer's maturity in those domains. That helps to have the conversation about what's a priority for you—is it security, is it sustainability, that sort of thing—and help to move them in the right direction. We then have agentic AI-powered reporting as well. It creates this maturity assessment automatically, comes up with recommendations for how the customer's maturity can improve. When the advisor sits down with the customer, this information is all available to start that meaningful conversation.
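As a rough illustration of the kind of aggregation a maturity assessment implies, here is a minimal sketch. The domain names, the five-level scale, and the target level are illustrative assumptions, not the actual Atos Digital Advisor model:

```python
def summarize_maturity(scores, target=3):
    """scores maps a domain name to a maturity level (assumed 1-5 scale).

    Returns the domains that fall below the target level, largest gap
    first, as candidate priorities for the advisor's recommendations.
    """
    # Gap of zero means the domain already meets the target maturity.
    gaps = {domain: max(target - level, 0) for domain, level in scores.items()}
    return sorted((d for d, g in gaps.items() if g > 0), key=lambda d: -gaps[d])
```

With hypothetical scores like `{"security": 1, "sustainability": 2, "cost": 4}`, security would surface first as the largest maturity gap, which is exactly the "what's a priority for you" conversation described above.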

Thumbnail 820

Given that it's an AWS conference, this is all powered by AWS services. The front-end UI connects to Amazon API Gateway, which triggers a Lambda function, and then AWS Step Functions kick in to answer all of the questions within our advisory pack. These can be a range of technical questions about the customer's AWS environment or a bunch of non-technical questions about compliance regimes and things like that. The Step Functions are triggering Amazon Bedrock AgentCore, so we're using the agentic AI services that we're all hearing a lot about this week within Bedrock.
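The front of that flow could look something like the following sketch: an API Gateway-triggered Lambda handler that shapes the advisory questionnaire into a Step Functions execution input. The payload shape and naming are assumptions for illustration, not Atos's actual implementation; the `start_execution` call is shown only as a comment since it requires a live AWS environment.

```python
import json

def build_assessment_input(customer_id, questions):
    """Shape the questionnaire into a Step Functions execution input.

    Each question keeps its domain tag so the downstream supervisor
    agent can decide which specialist agent should answer it.
    """
    return {
        "customerId": customer_id,
        "questions": [
            {"id": i, "domain": q["domain"], "text": q["text"]}
            for i, q in enumerate(questions)
        ],
    }

def lambda_handler(event, context):
    # API Gateway proxy integration delivers the request body as a JSON string.
    body = json.loads(event["body"])
    execution_input = build_assessment_input(body["customerId"], body["questions"])
    # In the real flow this would kick off the state machine, e.g. with boto3:
    #   sfn.start_execution(stateMachineArn=STATE_MACHINE_ARN,
    #                       input=json.dumps(execution_input))
    return {"statusCode": 202, "body": json.dumps(execution_input)}
```

Returning 202 reflects that the assessment runs asynchronously in Step Functions rather than inside the API request.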

We're storing information in Amazon S3 and caching information in DynamoDB. The Amazon Bedrock agent, or supervisor agent, kicks in. It takes the questions from the questionnaire and works out what to do with them. If it's a question about security and compliance, for example, it defers it to an agent that's doing web searching. For example, if this were doing an assessment for a healthcare customer here in the US, the question might be around which compliance regimes that customer needs to comply with.

The agent goes out and learns from the web that the customer will likely need to be HIPAA compliant. If the question is more technical in nature, it gets delegated to another agent. For example, if someone asks how the customer separates their networking, that is a fairly generic question. The first thing that happens is it goes to an agent that talks via an MCP server to the AWS documentation. That generic question about how you segregate your networks becomes a much more AWS-specific question involving security groups, NAT, AWS Network Firewall, routing of transit gateways, and things like that.

That more detailed question then gets passed on to the next agent, which actually does querying via MCP of the customer's environment. So that much more complex question is handled by the agent and the MCP server. It goes off and interrogates the customer's environment, and then that information all comes back into the reporting that I talked about on the previous slide. The human agent then sits down with the customer and validates that information, so we have human in the loop to ensure accuracy of the information.
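The supervisor's delegation described above can be caricatured as a routing function. This is purely illustrative: the agent names are made up, and in a real Bedrock AgentCore setup the routing decision is made by the model, not by keyword matching.

```python
# Hypothetical sketch of the supervisor agent's routing, as described above:
# compliance questions go to a web-search agent; technical questions are first
# refined against AWS documentation (via MCP), then answered by an agent that
# queries the customer's environment (also via MCP).

COMPLIANCE_KEYWORDS = {"compliance", "hipaa", "gdpr", "regulation"}

def route_question(question):
    """Return the ordered list of specialist agents for one question."""
    words = set(question.lower().split())
    if words & COMPLIANCE_KEYWORDS:
        # e.g. "Which compliance regimes apply to a US healthcare customer?"
        return ["web-search-agent"]
    # A generic technical question ("how is networking separated?") becomes an
    # AWS-specific one (security groups, NAT, Network Firewall, transit
    # gateways) before the customer's environment is interrogated.
    return ["aws-docs-mcp-agent", "environment-mcp-agent"]
```

The two-step chain for technical questions mirrors the talk: the documentation agent sharpens the question, and only the sharpened question hits the customer's environment.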

They talk through with the customer and discuss the relevant information, check it for accuracy, and have a much more meaningful conversation with customers about how we help them move to where they want to be, rather than spending all the time talking to them about where they currently are. We are over at booth 1661. We are Atos. If you are interested in a conversation about digital advisory, cloud modernization and migration, or data and AI, then please come and see us. I wish you a good day and a great conference.


This article is entirely auto-generated using Amazon Bedrock.
