🦄 Making great presentations more accessible.
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of the original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.
Overview
📖 AWS re:Invent 2025 - How Baker Hughes is Driving Energy Innovation with AWS AI (AIM347)
In this video, Cheo Alvarez from Baker Hughes discusses how the company is leveraging agentic AI in partnership with AWS to address energy industry challenges, particularly the 165% anticipated growth in energy demand. He explains Baker Hughes' Leucipa platform, which integrates physics-based models, machine learning, and agentic AI to extract meaningful insights from massive data volumes (15 petabytes per drilling rig). The presentation details their architectural approach using orchestration agents and specialized domain agents, with a practical example of reservoir monitoring and electric submersible pump optimization. Key learnings emphasized include data quality, explainability, adaptability to heterogeneous customer environments, and the critical importance of human-in-the-loop validation for heavy industry applications. The company leverages AWS technologies and contributes to the open source Energy Agents project, aiming to scale digital transformation across global energy operations.
This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.
Main Part
Energy Industry Challenges and the Transformational Promise of Agentic AI
All right, everyone, well, thank you very much for attending the first session of the day. My name is Cheo Alvarez. I'm with Baker Hughes, and this is how we're driving energy forward with AI technologies in partnership with AWS.
OK, so there are a couple of points that I want to talk about. First, setting the stage for the challenges that we're facing in energy, driven in large part by the energy needs of LLMs, agentic AI, machine learning, and the like. Then we're going to talk about how we're approaching that, specifically with our use and application of agentic AI and some lessons that we've learned along the way. We've been doing this for a long time; we've been introducing digital projects into the marketplace for many, many years. Of course, agentic AI is very new, but we're trying to weave it into every layer of our stack to really accelerate our own and our customers' ability to deploy these types of solutions at scale. Then a little bit about what's under the hood, which we're happy to go into in a lot more detail after the session, and finally looking ahead at where we see things going in the future.
OK, so the energy industry has for years been chasing the same vision of the digital oil field. The world's energy demands, though, have only accelerated in the last few years, and this is a statistic that amazes me every time I see it: 165% anticipated growth in energy demand is really just remarkable, driven in large part by hyperscaler energy needs and the like. We have been on a journey in oil and gas of acquiring data and really appreciating the value of data at every step of our operation, from the reservoir, to when we drill and complete a well, to how we measure that well, to the ongoing operation of that well and everything downstream of it. You can see the statistics: we generate massive amounts of information. 15 petabytes of data coming off of a drilling rig. There are 1,800 drilling rigs operating at any one time across the world, and again, that's only the first part of the process, just drilling the well to begin with.
So there's data coming from everywhere. Extracting signal from that noise always has been and forever will be the challenge, but the demand, the opportunity here, is massive. If you've followed the energy industry, you know that when I was first growing up, the story was always that the world is running out of oil. That's actually not technically the case; it's that the world is sort of running out of cheap oil. And so where we've risen over and over again to meet that challenge is by introducing new physical processes: horizontal drilling, hydraulic fracturing, deepwater exploration and production. We will continue to do so with technology. It's just that, increasingly, the technology will be more digital in nature, hence my talk.
So this is a quote from McKinsey, and I think it would resonate with everybody here, about how transformational agentic AI is going to be to every industry on the planet. It's going to transform every enterprise's operations and, in my opinion, it's going to make work a lot more interesting, as I can delegate my menial tasks to an agent and then focus on something more creative and higher value added. Where we see this impacting, at a very high level: 33% of enterprise software will include some form of agentic AI by 2028, again driving energy demand. 15% of work decisions will be made by agentic AI in 2028. Over a billion agents are going to be created and deployed, to varying degrees of quality, and this is where the quality of our agents is really going to be the competitive advantage we see for Baker Hughes and our ability to help our energy customers going forward, because garbage in, garbage out. We have a lot of deep domain expertise that we need to apply and bring to our agents.
Leucipa's Journey: From Digital Oil Fields to Agentic Enterprise Software Architecture
So, where we have been: the technology area that I work in is called Leucipa. Let me talk a little bit about the digital technology program that I'm a part of, because we have been on this journey for years.
The journey that we're on right now with agentic AI is the same journey that we've taken from day one with the digital oil field. The first step of the process for us was getting access to data. Garbage in, garbage out; I've heard it from every person I've talked to here: get access to and contextualize that data. Then add and automate workflows, so work with our customers to understand what their processes are, what data selections and technology choices they've made, the modeling tools they use, and how they apply those to their business processes. I'm trying to stay very high level here without using too much industry jargon, but again, I think that's very analogous across all industries.
We want to introduce automation, again, sort of respecting our customers' technology choices and business processes, so that we can eventually deliver some type of an outcome. We've been doing this since the early 2000s, first with physics-based, classical techniques. We've then layered in machine learning and AI type models, and now we're adding Agentic AI and weaving it through every piece of this stack as well.
Real-world application of agentic AI means agentic enterprise software. We're taking very much the same approach that I think everyone is: we're introducing centralized orchestration. Where I think our real differentiator lies is, again, in the quality of our individual agents. The orchestration agents are largely generic, but where we differentiate is in our architecture and how we approach these things, and in the quality of the individual agents themselves. We've essentially taken Leucipa capabilities, wrapped them up as individual agents, and then contextualized them.
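As a rough illustration of that pattern, and not of Leucipa's actual implementation, here is a minimal sketch of a central orchestrator dispatching requests to specialized domain agents; the agent names, topics, and payload shapes are all hypothetical:

```python
# Hypothetical sketch of a central orchestrator dispatching to specialized
# domain agents. Names and structure are illustrative, not Leucipa's API.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class AgentRequest:
    topic: str      # e.g. "reservoir_monitoring" or "esp_optimization"
    payload: dict   # contextualized input data for the domain agent


class Orchestrator:
    def __init__(self) -> None:
        self._agents: Dict[str, Callable[[dict], dict]] = {}

    def register(self, topic: str, agent: Callable[[dict], dict]) -> None:
        """Register a specialized domain agent for a given topic."""
        self._agents[topic] = agent

    def dispatch(self, request: AgentRequest) -> dict:
        """Route the request to the matching domain agent."""
        agent = self._agents.get(request.topic)
        if agent is None:
            raise ValueError(f"No agent registered for topic '{request.topic}'")
        return agent(request.payload)


# Two illustrative domain agents wrapping underlying capabilities.
def reservoir_monitoring_agent(payload: dict) -> dict:
    return {"status": "observing", "wells": payload.get("wells", [])}


def esp_optimization_agent(payload: dict) -> dict:
    return {"recommended_frequency_hz": 48.0, "well": payload.get("well")}


orchestrator = Orchestrator()
orchestrator.register("reservoir_monitoring", reservoir_monitoring_agent)
orchestrator.register("esp_optimization", esp_optimization_agent)
print(orchestrator.dispatch(AgentRequest("esp_optimization", {"well": "well-042"})))
```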
In oil and gas, you have a reservoir that's two miles beneath you, extending for miles into the distance in every direction. You have no means of directly measuring what is going on under the earth, so everything for us is an approximation of the truth. You have a near-wellbore region, which you can measure, and you have mathematical ways of approximating what happens in the reservoir far away from the well. We need agents that have some awareness of the accuracy of their predictions. You're always measuring the same value, but you approach it in different ways. You need a contextualized agent that understands the uncertainty and the quality of the model that it used to arrive at that approximation. This is where we're really spending a lot of our time and focus.
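One way to picture that kind of contextualized, uncertainty-aware output is a prediction that always carries its provenance, an uncertainty estimate, and an explanation alongside the value. This is a sketch only; the field names are assumptions, not Leucipa's data model:

```python
# Illustrative structure for an uncertainty-aware agent prediction.
# Field names and values are assumptions for the sketch.
from dataclasses import dataclass


@dataclass
class Prediction:
    quantity: str        # what is being estimated, e.g. "reservoir_pressure_psi"
    value: float         # the approximation itself
    uncertainty: float   # e.g. one standard deviation on the estimate
    model: str           # which model produced it (physics, ML, correlation)
    data_quality: float  # 0..1 score for the inputs the model consumed
    explanation: str     # human-readable reasoning, for explainability


estimate = Prediction(
    quantity="reservoir_pressure_psi",
    value=3120.0,
    uncertainty=85.0,
    model="nodal_analysis_v2",
    data_quality=0.92,
    explanation="Extrapolated from near-wellbore gauge data with a physics-based model.",
)
```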
We have our foundational capabilities that we call Leucipa. These are our physics-based models and the classical correlations that we apply. The oil industry has really followed along with the adoption of computing in general: we started with very simple plotting functions, then, as CPUs became prevalent, we added reservoir simulation, nodal analysis, and ways of iteratively calculating and simulating different things. There has been heavy adoption of machine learning across our industry, and now, again, we're wrapping all of these up and packaging them as agentic AI in the shape you can see here.
It's presented as a chatbot because this is the most comfortable user interface that you can present to an end user. But behind the scenes, what's happening is that Lucy is calling on all of the various agents that Leucipa provides and contextualizing them, so that we can, again, improve the signal-to-noise ratio, present the right context, and call on the right agent at the right time to surface the right recommendation to our end users. You can see here that everything builds on top of everything else.
So Leucipa's ability to go out and connect to data, contextualize that data, and work with our customers' stack in whatever shape or with whatever tools they might have chosen, whether that's on premise or whether they've heavily leaned into SaaS and cloud technologies. We really need to be prepared to meet customers wherever they are, both on premise and in the cloud, and all of this is with the intention of avoiding nonsensical recommendations that just add more noise when we're meant to be extracting more signal.
So this is where I think we start to differentiate is in our architectural approach.
And it's a forward-looking architecture rooted in our history. The gravity of information is shown with the energy operator, our customer, on the left-hand side, and Baker Hughes and our digital platform Leucipa on the right-hand side. The energy operator has a wealth of information from all the different tools and applications that they use. They have their business processes that they flow through, and they have their particular operations that they apply these to. It's very heterogeneous in the sense that every field a customer operates is generally different.
When we go to talk to a customer and give them a demo of our software, they say, "Okay, well test validation," which you would imagine is a very generalized process standardized across the industry. We start getting into the details of how they do it, and they say, "That looks nice, but it's not how we do things here." What we very quickly took away from that was that we need flexibility on every side of this. That's what this architecture affords us: the flexibility of adapting our agents very quickly to the customer's choices of data sources, the customer's choices of tools, and then standing up some agentic protocols in between so that each side can surface what the other is providing.
We then call orchestration agents on each side, both to introduce agents and tools and to bring humans into the loop to validate the responses of those agents and tools as they communicate back and forth. We really view agents as a partner, as a teammate here, not as a displacement of human capital, but as an extension of human capital.
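To make the hand-off concrete, here is a minimal sketch, with made-up message fields rather than any actual Leucipa protocol, of a recommendation that crosses the boundary between the two sides and records explicit human-in-the-loop sign-off on each:

```python
# Sketch of a cross-boundary recommendation with human-in-the-loop sign-off
# recorded for both sides. Field names are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Recommendation:
    well_id: str
    action: str                                   # e.g. "reduce ESP frequency to 48 Hz"
    rationale: str                                # explainability for the reviewers
    provider_validated_by: Optional[str] = None   # Baker Hughes specialist
    operator_validated_by: Optional[str] = None   # customer's production engineer

    def approve_provider(self, reviewer: str) -> None:
        self.provider_validated_by = reviewer

    def approve_operator(self, reviewer: str) -> None:
        self.operator_validated_by = reviewer

    @property
    def ready_to_apply(self) -> bool:
        # Only act once humans on both sides have signed off.
        return self.provider_validated_by is not None and self.operator_validated_by is not None
```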
Real-World Implementation: Reservoir Monitoring Agents and Lessons Learned at Scale
To zoom in and get very practical about what one of these use cases looks like: we may have a customer who's chosen a particular flavor of reservoir monitoring agent. It could be coming from a physics-based tool or a machine learning-based tool, or it could come from simple surveillance of tags coming off of a wellhead. That reservoir monitoring agent should be tuned to specific conditions. In this case, we're describing a rising oil-water contact: if we get too much water encroaching into a well, we're going to permanently damage the well and produce far more water than we want over the life of that well, and we will have effectively lost the well.
We'll have this reservoir monitoring agent running on the customer side or on the Baker Hughes side, but it's running in the background, observing. When it sees some type of event start to happen that it knows we can influence with another agent, it will call out to Baker Hughes' specialist agent. Baker Hughes really specializes in the design and manufacture of electric submersible pumps, so these are pumps that sit two miles under the ground at the bottom of the wellbore. We have various simulators that we can use to predict the optimal pump frequency, speed, or liquid production through that pump so that I don't damage my reservoir. Typically, that would result in a decrease, a slowing down of that pump.
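As an illustration of what that specialist call might look like, here is a sketch that assumes a hypothetical simulate_esp function standing in for a real simulator; it simply searches downward from the current setpoint for the highest pump frequency whose predicted drawdown stays within a safe limit:

```python
# Hypothetical specialist-agent sketch: pick the highest ESP frequency whose
# simulated drawdown stays below a safe limit. The simulator is a toy stand-in,
# not a real Baker Hughes model.
def simulate_esp(frequency_hz: float) -> dict:
    """Stand-in simulator: returns predicted drawdown and liquid rate."""
    return {
        "drawdown_psi": 12.0 * frequency_hz,      # toy linear relationship
        "liquid_rate_bpd": 95.0 * frequency_hz,
    }


def recommend_frequency(current_hz: float, max_safe_drawdown_psi: float) -> float:
    """Search downward from the current setpoint for a safe operating frequency."""
    candidate = current_hz
    while candidate > 0:
        result = simulate_esp(candidate)
        if result["drawdown_psi"] <= max_safe_drawdown_psi:
            return candidate
        candidate -= 0.5  # step the frequency down and re-simulate
    raise RuntimeError("No safe frequency found; escalate to a human specialist")


# Typically the recommendation is a decrease from the current setpoint.
print(recommend_frequency(current_hz=55.0, max_safe_drawdown_psi=600.0))
```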
Baker Hughes' agentic models make the recommendation of what we need to decrease to, and this is where it starts to get very important for our industry. In applying AI, and agentic AI specifically, to heavy industry, we really need to lean hard into having human-in-the-loop validated recommendations, reviewed by specialists who understand the physical principles behind what these agents are recommending. Our agent might make a recommendation, but the quality of that recommendation is going to be wholly dictated by the model that it called on and the quality of the data going into it. Our human in the loop validates and sanity-checks it before sending it back to the customer. This is a service that we provide.
We send that recommendation back to the customer, who then validates it with their own human in the loop, the production engineer. Baker Hughes is not able in all cases to act on that recommendation itself. Some customers want us to provide that service, while others want to introduce their own human in the loop. It will generally be validated by their production engineer, an employee of theirs, who approves the recommendation coming from our subject matter expert. Then they will push that recommendation out to an edge device.
We've really instrumented all of our wellheads these days, so we can push that recommendation out to an edge device without necessarily needing to send somebody out there to make that change physically. And then the cycle just repeats: the reservoir monitoring agent goes back to observing and verifies that the pump change actually had the impact we expected it to, and the process continues. We have so many different agents, calculators, and ways of approximating truth and quantifying uncertainty that this generalizes well. This is one particular process, but it scales and can be generalized to so much of what we do.
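Pulling those steps together, and reusing the hypothetical Recommendation and recommend_frequency sketches above, the closed loop looks roughly like this; every function here is an illustrative stand-in, not Leucipa code:

```python
# Sketch of the closed loop: observe -> recommend -> validate on both sides
# -> apply at the edge -> verify. Builds on the Recommendation and
# recommend_frequency sketches above; the remaining calls are stand-ins.
def detect_rising_water_contact(well_id: str) -> bool:
    """Stand-in for the reservoir monitoring agent's background surveillance."""
    return True  # pretend an oil-water-contact event was flagged


def push_to_edge_device(well_id: str, frequency_hz: float) -> None:
    """Stand-in for pushing the approved setpoint out to the wellhead edge device."""
    print(f"{well_id}: ESP setpoint changed to {frequency_hz} Hz")


def monitoring_cycle(well_id: str, current_hz: float) -> None:
    if not detect_rising_water_contact(well_id):
        return  # nothing to do; keep observing

    # Specialist agent proposes a new, typically lower, setpoint.
    new_hz = recommend_frequency(current_hz, max_safe_drawdown_psi=600.0)
    rec = Recommendation(
        well_id=well_id,
        action=f"reduce ESP frequency to {new_hz} Hz",
        rationale="Rising oil-water contact; slowing the pump limits water encroachment.",
    )

    # Human-in-the-loop validation on both sides before anything is applied.
    rec.approve_provider("baker_hughes_specialist")
    rec.approve_operator("customer_production_engineer")

    if rec.ready_to_apply:
        push_to_edge_device(well_id, new_hz)
        # The monitoring agent then resumes observing to verify the impact.


monitoring_cycle("well-042", current_hz=55.0)
```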
How do you break up a complex system? You break it up into small parts. We have many agents, each solving one small part of this, and the orchestration agent, the higher-level optimizer of optimizers, is really just making everything work better together. What we have learned through this has been very much the same set of learnings we've had since we started doing this, back when AI, ML, and agentic tools were not widely prevalent. Data quality is everything, whether you're doing this with physics models, calculations, or algebraic correlations. The quality of the data is the foundation that you stand on.
The user experience matters, and as we introduce more and more of what are perceived to be black boxes, the explainability of any recommendation or opportunity that we surface is absolutely key. We have users who are highly technical specialists who have spent years understanding the first-principles physics of how this pump works, of how this reservoir behaves, of how to numerically and analytically solve these kinds of problems. So we need to be able to explain, at a very low level, how we arrived at any particular recommendation. That was something we were told from day one as we tested the agentic concept with people: explainability is everything.
Adaptability. We use the term heterogeneity to describe our reservoirs, and we have heterogeneous organizations as well. Every customer chooses different tools, different data stacks, different approaches, different business processes, and their fields are all different. We need to be prepared to meet them wherever they are on their own digital transformation journey. And we work with customers not just on North American land, obviously; global customers around the world will all be at different stages of their digitalization process.
And then the last bit is governance and cost: how to govern these things. HS&E again: heavy industry faces disastrous consequences when the wrong recommendation is made because it's based on faulty data, stale models, or the like. Putting the guardrails in place based on Baker Hughes' expertise is really one of the areas where we think we bring the most value to our customers.
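As a minimal sketch of what such guardrails could look like in code, with thresholds and rules that are purely illustrative rather than Baker Hughes' actual policies, a recommendation might be rejected before it ever reaches a reviewer if its inputs are stale or the proposed setpoint falls outside an allowed operating envelope:

```python
# Illustrative guardrail checks applied before a recommendation is surfaced.
# Thresholds and rules are assumptions for the sketch.
from datetime import datetime, timedelta, timezone


def passes_guardrails(
    proposed_hz: float,
    min_hz: float,
    max_hz: float,
    last_data_timestamp: datetime,
    max_data_age: timedelta = timedelta(hours=6),
) -> bool:
    """Reject setpoints outside the operating envelope or based on stale data."""
    if not (min_hz <= proposed_hz <= max_hz):
        return False  # outside the allowed operating envelope
    if datetime.now(timezone.utc) - last_data_timestamp > max_data_age:
        return False  # inputs are too old to trust
    return True


ok = passes_guardrails(
    proposed_hz=48.0,
    min_hz=35.0,
    max_hz=60.0,
    last_data_timestamp=datetime.now(timezone.utc) - timedelta(hours=1),
)
print(ok)
```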
What makes it work? A lot of the familiar AWS technologies that we know and love and have heard announced and presented about today. There's an open source project that I want to highlight here called Energy Agents, which is something that AWS sponsors and has released; you can go check it out on GitHub. It shows a lot of the same techniques that we apply ourselves internally. We build on these things: data management, of course, and then again, guardrails, which I've mentioned.
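For reference, the kind of AWS building block involved, shown here only as a generic, minimal example of invoking a foundation model through Amazon Bedrock's Converse API rather than as Leucipa or Energy Agents code, looks roughly like this (the model ID, region, and prompt are placeholders):

```python
# Minimal example of calling a foundation model via Amazon Bedrock's Converse API.
# Region, model ID, prompt, and IAM permissions are assumed placeholders.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[
        {
            "role": "user",
            "content": [
                {"text": "Summarize the risk a rising oil-water contact poses to an ESP-lifted well."}
            ],
        }
    ],
)

# Print the model's text reply.
print(response["output"]["message"]["content"][0]["text"])
```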
Looking ahead, where we're headed is very much anchored in our past: connecting to and contextualizing data, working with customers to describe their workflows, automating manual processes powered by these agents, and then of course using that to drive some type of impactful outcome. Up to now we have been doing this manually, with our implementation engineers and our developers creating these types of tools. I'm speaking specifically about our customer-facing Leucipa application today, but agentic is what's helping us adopt at massive scale the technologies that we have developed over the years, technologies that are much harder to roll out at scale given the level of tuning and customization needed to really bring them to an asset.
So weaving agentic into how we onboard our customers, into how we tailor our workflows and adapt them to a particular customer's needs, and then creating and turning those over to customers so that they can use them themselves: that is really going to be the game changer, we believe, and what's going to allow us to digitize this operation at scale. We think it's going to be possible, maybe for the first time ever, with agentic.
I think with that, I am very much at time and perfectly on time, so thank you very much. And we'll be happy to answer any questions. We'll be hovering around there somewhere, so thank you very much.
This article is entirely auto-generated using Amazon Bedrock.