<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kazuya</title>
    <description>The latest articles on DEV Community by Kazuya (@kazuya_dev).</description>
    <link>https://dev.to/kazuya_dev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3638470%2F2b0e848a-59ac-451d-a32f-e3d6d2b1ab9a.jpg</url>
      <title>DEV Community: Kazuya</title>
      <link>https://dev.to/kazuya_dev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kazuya_dev"/>
    <language>en</language>
    <item>
      <title>AWS re:Invent 2025 - Breaking AWS networks on purpose to build resilience (DEV343)</title>
      <dc:creator>Kazuya</dc:creator>
      <pubDate>Thu, 11 Dec 2025 05:00:30 +0000</pubDate>
      <link>https://dev.to/kazuya_dev/aws-reinvent-2025-breaking-aws-networks-on-purpose-to-build-resilience-dev343-1g2i</link>
      <guid>https://dev.to/kazuya_dev/aws-reinvent-2025-breaking-aws-networks-on-purpose-to-build-resilience-dev343-1g2i</guid>
      <description>&lt;p&gt;&lt;strong&gt;🦄 Making great presentations more accessible.&lt;/strong&gt;&lt;br&gt;
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.&lt;/p&gt;

&lt;h1&gt;Overview&lt;/h1&gt;

&lt;p&gt;📖 &lt;strong&gt;AWS re:Invent 2025 - Breaking AWS networks on purpose to build resilience (DEV343)&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In this video, Craig Johnson, Principal Solutions Architect at Forward Networks, demonstrates how to introduce controlled chaos into AWS networks to build resiliency. He explains using AWS Network Manager for network visualization and emphasizes the Reachability Analyzer as a critical tool for intent-based verification. Johnson shows how to establish baseline checks before changes, intentionally break network components (security groups, routes, Transit Gateway attachments), then use pre and post-change intent checks to verify network functionality. He advocates for automating these checks in CI/CD pipelines and conducting regular "chaos hours" as DR drills, providing a GitHub repository with Terraform code for implementation.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/VdNvNFqdYHQ"&gt;
&lt;/iframe&gt;
&lt;br&gt;
This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.&lt;/p&gt;

&lt;h1&gt;Main Part&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=0" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F59cf67xhjhpxiheo19pt.jpg" alt="Thumbnail 0" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Introduction: Breaking Networks to Build Resilience&lt;/h3&gt;

&lt;p&gt;Morning, morning. How's everyone going? So my name's Craig Johnson. I am a Principal Solutions Architect at Forward Networks. So quick show of hands, who here would consider themselves a network engineer, network guy, something like that? Okay. Who here has taken down a production network before? Awesome, awesome. Everyone has. So what I want to talk about today is introducing a little bit of chaos into your networks in an effort to build more resiliency, an idea that we've taken from on-prem data center networks. I want to apply the same kind of logic to our AWS networks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=50" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxh9n4z826e6ewpps8s7k.jpg" alt="Thumbnail 50" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So a little bit about me. Most of my career has actually been on the data center side. I spent many, many years at Cisco. I've got the CCIEs; they're all expired these days. Most of what I do is on the CLI, and I'm going to apply a lot of those same principles here. Also, if you're interested in the Community Builder program, definitely reach out to me afterwards. It's a great program. It's probably the best thing I've done in my career in the last five years. And I also run a local networking group. If anyone's in the Dallas area, feel free to come talk to me about that.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=80" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0d22hsa7ynb1poxdlop.jpg" alt="Thumbnail 80" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=90" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe1hoozi9o1d4kcjzon3t.jpg" alt="Thumbnail 90" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So let's set a few ground rules. Like I said, I'm a CLI junkie. Most of what we're going to do is going to be  using the CLI and actually putting these ideas into practice.  Being an engineer, a network engineer, is hard. It's always the network. That's what the shirt says. It's always the network, and everyone always blames the network, and that's the mindset we're going to go into. It's natural that they blame the network. The network is the glue of everything inside your environment.&lt;/p&gt;

&lt;p&gt;So when something doesn't work, you know, I can't say it's the database. Can't say it's the application. The network is the thing that connects all of those things together, so that's the kind of mindset you have to take. You're always having to disprove it's the network, and I'm going to show you how to do a little bit of that, to give you some of that confidence and a way to say, hey, I can definitely show management and everyone else that the network is not the problem here.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=130" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjc2vabdlnfpuc12mfrw4.jpg" alt="Thumbnail 130" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=150" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg1zqkz590pg9vk7vzbhg.jpg" alt="Thumbnail 150" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Like I mentioned, I'm going to show you some screenshots, and that'll be useful, but mostly I'm going to show you what we do via pipelines, the CLI, and infrastructure as code. This all comes from a repo I've created that you'll be able to download; there'll be a QR code for it at the end. You can look at the code I wrote to build this and actually run the scripts to introduce some of this chaos into your environment.&lt;/p&gt;

&lt;p&gt;And of course, what I don't like about doing things in the cloud is a lot of my traditional network engineer tools don't really work anymore. You know, I used to be able to log into a router. I'd SSH into my Cisco router, run a few show commands, show some routes, do some pings. I can't really SSH into a Transit Gateway to look at its routes. I can go to that screen and kind of see it, but I can't really look at a routing table. There really isn't a routing table to look at. I don't have access to the same level of functionality. Now, some people will say, well, just turn on things like flow logs, and you get that. I'll explain why that's not a great idea in a second.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=190" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6sklak4c7xlgukushmzk.jpg" alt="Thumbnail 190" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=210" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw3c9rnj419mg1tk43rh1.jpg" alt="Thumbnail 210" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But as I said, all of the code that I'm going to use today is available at the end of this presentation. You'll be able to go to my repo, download it, and play with it. It's an environment you can set up, but you can also take the concepts into your own environment. Okay, so today, the game plan, what we're going to do. I'm going to talk about breaking your network for fun and profit. This is going to show how I'm going to introduce some of this controlled chaos into your environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=220" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhnfr0ek6tcsqw0nas91d.jpg" alt="Thumbnail 220" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=230" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe7nfl7ghsrq7609kfx0p.jpg" alt="Thumbnail 230" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Establishing Baselines with AWS Network Manager Visualization&lt;/h3&gt;

&lt;p&gt;But first we have to gather a baseline. So before you make your change, if you've ever done any network changes, before you go to your change control board, you have to have a baseline of what your environment looks like, and you have to verify everything works beforehand. How do we do that, and how do we prove it out? Do I actually trust what my applications are saying? This is always the disconnect between the network team and the rest of the IT crowd: the network sees the flows, but you don't always really know whether the application is actually working. I'm going to show you why we don't necessarily need to trust those, because we can get our own metrics on this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=260" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu2i58khwr6s3l1my6ibt.jpg" alt="Thumbnail 260" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I want to introduce you to the idea of verifying my network based on intent. So when I do a change, I've got a MOP (method of procedure), and I'm like, okay, I'm going to make these route changes, these attachment changes, these changes to an ALB. I don't want you to think about it that way. I want you to think about the intent of my change: what flows do I want? Where's my source, my destination, my ports and protocols? Is that actually working? Whatever I change in the middle doesn't matter so much, because oftentimes I'm not the only one making the change. I can verify that my change was done correctly, that I implemented it the way I meant to. That doesn't mean the intent was right. So I want to give you a view on that.&lt;/p&gt;
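
&lt;p&gt;To make that concrete as data, here's a minimal sketch (my own illustration, not code from the talk's repo): each intent is just a source, a destination, and the ports and protocols that should work. All IDs and ports below are placeholders.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical intent definitions: each entry says "this flow must work".
# ENI IDs and ports are placeholders, not values from the session.
INTENT_CHECKS = [
    {"name": "web-to-app",
     "source": "eni-0aaa1111bbbb22222",       # web tier ENI
     "destination": "eni-0ccc3333dddd44444",  # app tier ENI
     "protocol": "tcp", "port": 8080},
    {"name": "app-to-db",
     "source": "eni-0ccc3333dddd44444",
     "destination": "eni-0eee5555ffff66666",  # database ENI
     "protocol": "tcp", "port": 5432},
]
&lt;/code&gt;&lt;/pre&gt;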

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=300" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fusqcpwyau3mig34hks8p.jpg" alt="Thumbnail 300" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We're going to iterate on this a little bit, so I want to show you how we can take this and actually  iterate through your environment, so you're not just saying, okay, do this one time, do it for one flow.&lt;/p&gt;

&lt;p&gt;We can do this in a more automated way so that we're not doing this manually. Now, a lot of people still do the click-offs manually, but this will let you integrate it into your pipelines. My code has examples of that too.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=320" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft0pze120b9wyq0ls0iqy.jpg" alt="Thumbnail 320" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So it turns out there are actually quite a few network engineer tools in AWS that a lot of people don't know about, and it kind of makes me sad that people don't really understand some of these tools we can use. In lieu of SSHing into a device, I can use AWS Network Manager to get a good view of my entire environment: one or more VPCs, transit gateways, Transit Gateway Connects, Direct Connects, VPNs. There are lots of tools I want to show you here. The main thrust I want to really focus on is using the Reachability Analyzer. This will help me look at the intent of my environment and help me figure out what's going on.&lt;/p&gt;

&lt;p&gt;So it turns out you actually have quite a bit here that, honestly, I don't see many people using, because you just kind of assume the network works. But when it doesn't, you rush to some of these tools to figure things out. I want to give you an idea of how to use them today.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=380" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmyxda12j4vte7rx9f2h3.jpg" alt="Thumbnail 380" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So the first thing you want to do, and I rarely see people do this, is get a good visualization of  your network. Now if this was a data center network, this would be me either opening up Visio or Draw.io, connecting my routers together. Maybe I've got an automated tool to do this, but generally it's just doing those things and connecting them. Way back in the day, getting one of those really big plotters and making my nice network map with all of my top of rack switches and everything together. I fortunately don't have to do that on AWS.&lt;/p&gt;

&lt;p&gt;I can use AWS Network Manager to visualize all of my AWS components, my transit gateways, my Direct Connects, my VPNs, my Transit Gateway Connects, just by registering them in Network Manager. It doesn't cost you anything, and it's a way for you to pull this entire environment together. It's super easy to do. All you really need to do, and of course you can do this via the GUI, but like I said, CLI, is create a global network in Network Manager and then create your sites. It's not even just your AWS components: obviously you're going to have Direct Connects, VPNs, things that connect externally. I can create sites and actually show where those things are connected, to build this kind of picture of your environment.&lt;/p&gt;
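
&lt;p&gt;A minimal sketch of those first steps with boto3 (the talk itself uses the AWS CLI; the transit gateway ARN and site details here are placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import boto3

# Sketch of the Network Manager setup described above. Network Manager is
# a global service homed in us-west-2; all resource details are placeholders.
nm = boto3.client("networkmanager", region_name="us-west-2")

gn = nm.create_global_network(Description="demo global network")
gn_id = gn["GlobalNetwork"]["GlobalNetworkId"]

# Register an existing transit gateway so it shows up on the map.
nm.register_transit_gateway(
    GlobalNetworkId=gn_id,
    TransitGatewayArn="arn:aws:ec2:us-west-2:111122223333:transit-gateway/tgw-0123456789abcdef0",
)

# Create a site for the on-prem side (the location drives the global map view).
site = nm.create_site(
    GlobalNetworkId=gn_id,
    Description="Dallas colo",
    Location={"Address": "Dallas, TX", "Latitude": "32.78", "Longitude": "-96.80"},
)
&lt;/code&gt;&lt;/pre&gt;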

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=460" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq085365m27v87cls3ja9.jpg" alt="Thumbnail 460" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=470" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw1hlqzap45pefm6e7pd9.jpg" alt="Thumbnail 470" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=480" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff9mfm5oucexgrydtvitd.jpg" alt="Thumbnail 480" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=490" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmuvy9qn38lg3e9nnikq0.jpg" alt="Thumbnail 490" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Reference the same ID. You could even put locations because this will give you a global map to show you where these things are at, so I can say, hey, here's all of my end to end network, not just AWS  but everything else with it. I can create devices, so obviously I don't manage those devices, but I can add like my Cisco routers, my on-prem firewalls into this,  same thing there, and then create links to show how they connect them and even put bandwidth on those links. Now, once I've done that, I can actually even do this across accounts  too. So many people don't have only one account. You can do this across multiple accounts as long as you give the right level of IAM access. You can create the entire environment no matter how big or small this  has to look like.&lt;/p&gt;

&lt;p&gt;So once I have created this, it gives me a dynamic map of my environment, and it updates to show whether things are up or down. The purple links are my transit gateway connections, and the green links are my VPN or Direct Connect connectivity. Everything I have, including my VPCs and Transit Gateway Connects, is going to show up on this map. As things get updated, as you add new VPC attachments, firewalls, anything, it's going to update. So before I make any of my changes, I always have a visual representation of my network, and I can go to my change control board and say the network is in a good state, at least at a high macro level, and show how my entire global network is set up.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=560" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffn0zevfar52s0g3t91j8.jpg" alt="Thumbnail 560" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=570" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F14td6bex1b4glaanjofa.jpg" alt="Thumbnail 570" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Intent-Based Verification Using Reachability Analyzer&lt;/h3&gt;

&lt;p&gt;OK, now that you've done that, let's move on to actually putting a couple of guardrails in place. A lot of people say, well, I can just turn on flow logs, or I can use synthetic transactions. There are a lot of good observability tools for that. They don't really help on the network side, but they are super useful: they give you a steady state of what the traffic on your network looks like. So before I've made the change, I can get an idea of the traffic that's going on in my environment and what it looks like. What does that mean for the rest of the environment, though?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=580" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkwaf4kvqtse8otu9c11l.jpg" alt="Thumbnail 580" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=600" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frs6wmbrklk633jwhgl92.jpg" alt="Thumbnail 600" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, as a network guy, I often have no idea what my applications are doing. Things are deploying, spinning up, spinning down all the time. Flow logs give me a really great way to say, OK, here's the traffic in my environment. But they are kind of pricey to leave on all the time. If you have a lot of traffic, they generate a lot of data: source, destination, the full five-tuple for every flow. That's expensive to keep on continuously, so it's not something you want to do unless you need that level of data analysis.&lt;/p&gt;
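
&lt;p&gt;If you do want flow logs on temporarily, a minimal boto3 sketch of turning them on for a window and then off again (the VPC ID and bucket are hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import boto3

# Sketch: enable flow logs for one VPC, delivered to S3, only while you
# need the data. IDs and the bucket name are placeholders.
ec2 = boto3.client("ec2", region_name="us-west-2")

resp = ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="s3",
    LogDestination="arn:aws:s3:::example-flowlog-bucket/vpc-flow-logs/",
)

# Later, once the analysis window is over, stop paying for them:
ec2.delete_flow_logs(FlowLogIds=resp["FlowLogIds"])
&lt;/code&gt;&lt;/pre&gt;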

&lt;p&gt;For outages, when we are actually trying to figure out what's wrong, they're not so useful. And the reason is&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=630" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdo4yfylfvjtdk4by1u7.jpg" alt="Thumbnail 630" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;these are things that are looking at the state of the network, looking at the traffic. If I've broken something and it's down, the traffic's not flowing, so it doesn't tell me where the problem is. And there are a lot of different places to look. I've got more than a few VPCs, so I've got to look at flow logs in multiple places. There are some good AI tools that'll help you dig through this, but they're not very useful here. Definitely turn them on when you need them, though; they give you a good monitor of what the network looks like if you have synthetic transactions, Application Load Balancers, things like that.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=660" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgfb5vjpz2hvdacwm5pdl.jpg" alt="Thumbnail 660" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Okay, so let's talk about intent checks. Intent checks are how we're going to build this model of the environment. They provide a way to ensure the changes I make never break the network. The idea is that whenever I make a change, I want to make sure I exit that change window never having had a broken network. And I need a way to prove it not just to myself, but to all the other stakeholders. Network changes are by far the largest source of outages that you will see in an environment. Now, network changes can encompass a lot of different things, but the reason, back to my original point, is that the network touches everything, so it's going to cause the most outages. We need to provide a way to prevent that.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=680" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frgma45kzl83xdvfdixnz.jpg" alt="Thumbnail 680" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=690" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7dv6rcu3agkg97ej0v2x.jpg" alt="Thumbnail 690" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=700" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx8hapzfcbxf65v25j559.jpg" alt="Thumbnail 700" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I should know all my critical flows. This goes back to the previous slide: it gives me an idea of what my critical flows are, or at least what my applications care about. This source goes to this destination through this load balancer, whatever it happens to be. Flow logs only check for actual traffic; they don't help me when things are down. I need a way to make sure the network is performing like I expect it to pre and post change, and that's really the key: pre and post.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=710" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxppvjx1otm2mi8kphwmz.jpg" alt="Thumbnail 710" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How can I do that, though, without relying on a bunch of application verifiers on the phone saying, yep, my app looks good, my app looks good? In any size environment that could be 100 people on a call checking these things. I need to be able to do this myself, and as a network guy, I don't necessarily have visibility into those applications. This is where Reachability Analyzer comes in. It is absolutely your best friend in making network changes, and I highly recommend you start using it.&lt;/p&gt;

&lt;p&gt;What this means is we're going to introduce a little bit of chaos into our environment, but we're going to use Reachability Analyzer to figure out whether what I've done actually breaks the environment. Now, you could make an actual real change here; my environment is going to show more of a simulated change. Either way, I want to make sure any change that I make doesn't affect the intent of the network.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=770" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F92yy7em8ayrn6ikfi06x.jpg" alt="Thumbnail 770" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now what kind of change could it be? It could be a messed-up route. I could mess up a security group or a network ACL. Maybe I screwed up a Transit Gateway attachment. It doesn't really matter. You can use this at a macro view, analyzing across Transit Gateways if I want to go across regions, or at more of a micro view, looking at connectivity between instances in a VPC, or a Lambda, or whatever. So let's go ahead and break something. We're going to create some chaos inside our network.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=800" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7y3r9qdqqlbgmyzkd10x.jpg" alt="Thumbnail 800" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=820" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1hypsh0wwqnmrqixhj9l.jpg" alt="Thumbnail 820" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Implementing Controlled Chaos and Automated Pipeline Integration&lt;/h3&gt;

&lt;p&gt;So there's lots of fun things we could do. I could block something in a security group, misconfigure an attachment somewhere, maybe create a blackhole route, change Application Load Balancer targets, or even just ruin my entire AWS Network Firewall policy. These are all things that, just in the course of making changes, are pretty common and super normal to see in an environment. Now, before you start doing this, you might not want to do this on your production network. It's your career, handle it how you want to, but you might want to do this in a twin of your environment, or use my repo to practice some of this. Once you get a little better at it, you can start to introduce some of this controlled chaos, but maybe not just yet.&lt;/p&gt;
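
&lt;p&gt;When you do get to that point, two of those faults might look roughly like this in boto3 (a sketch with placeholder IDs; aim it at a lab twin, not production):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import boto3

# Sketch of two of the faults mentioned above, injected on purpose.
# All resource IDs and CIDRs are placeholders.
ec2 = boto3.client("ec2", region_name="us-west-2")

# 1. Blackhole a route in a transit gateway route table.
ec2.create_transit_gateway_route(
    DestinationCidrBlock="10.1.0.0/16",
    TransitGatewayRouteTableId="tgw-rtb-0123456789abcdef0",
    Blackhole=True,
)

# 2. Revoke an ingress rule from a security group (e.g., drop port 8080).
ec2.revoke_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 8080, "ToPort": 8080,
        "IpRanges": [{"CidrIp": "10.0.0.0/16"}],
    }],
)
&lt;/code&gt;&lt;/pre&gt;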

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=830" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcpf0tqc607cgoceeswau.jpg" alt="Thumbnail 830" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=850" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fosd1to8tp8jqfpyqjz1j.jpg" alt="Thumbnail 850" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So let's go ahead and run our preflight intent checks. Okay, so this is Reachability Analyzer. Notice what I've done here: I have a series of checks. These are all point-in-time checks, though. Every time I run this on a baseline, before and after my change, I get a point in time that says, yes, the network is functioning correctly. And when I say the network is functioning correctly, what that means is I've specified a source interface, source port, source IP address, a destination, and the ports and protocols I'm trying to get to, and it's going to tell me the exact path it takes through my AWS environment.&lt;/p&gt;
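
&lt;p&gt;In code form, one of those checks looks roughly like this with boto3 (a sketch; the session shows the console and CLI, and the ENI IDs reuse the hypothetical intents from earlier):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import boto3

# Sketch: one intent expressed as a Reachability Analyzer path, plus a
# point-in-time analysis run. ENI IDs are the earlier placeholders.
ec2 = boto3.client("ec2", region_name="us-west-2")

path = ec2.create_network_insights_path(
    Source="eni-0aaa1111bbbb22222",        # web tier (placeholder)
    Destination="eni-0ccc3333dddd44444",   # app tier (placeholder)
    Protocol="tcp",
    DestinationPort=8080,
)
path_id = path["NetworkInsightsPath"]["NetworkInsightsPathId"]

# Each analysis run is one point-in-time check of this intent.
analysis = ec2.start_network_insights_analysis(NetworkInsightsPathId=path_id)
analysis_id = analysis["NetworkInsightsAnalysis"]["NetworkInsightsAnalysisId"]
&lt;/code&gt;&lt;/pre&gt;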

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=900" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Favuz87cxnej1sansdqcu.jpg" alt="Thumbnail 900" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It goes from the instance I'm going from, the ENI that it's attached to, any security groups and network ACLs you pass through, the routing table it's configured with, all the way till it finally gets to its destination. This is what I mean by intent: I have source, destination, everything in between, and really all I care about is a yes or no. Does this pass? Does this fail? From the previous slide, you saw that all of those were successful. So I run these checks, I give this to my change review board, and I say, yes, every one of my intent checks is passing before I make the change. I make my change, and then I run them again afterwards to see what happens. So let's do that. We'll go ahead and make our change that breaks something in the environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=940" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7h39glh6qpr4g6fa2f0i.jpg" alt="Thumbnail 940" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The reason this is useful is that if I'm making a change and I don't know its effect, trying to get all of my applications checked could take a long time, and I only have so much time in my maintenance window. Like I said, I can run my pings and traceroutes and things like that, but that only gives me basic connectivity. It doesn't tell me whether, going through a firewall or a security group, I'm actually hitting the right ports. And it's extremely limited in time. If I have a four-hour change window and I mess something up, but I don't know exactly what it was or where to pinpoint it, getting all my application people to tell me this app works, this app doesn't, it works in this region but not that region, takes forever. This is a way to tell you right away what works and what doesn't.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=960" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz822is4ga2p1ezk4ddct.jpg" alt="Thumbnail 960" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=970" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2rtnimi2wp8eus5t6s4o.jpg" alt="Thumbnail 970" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Okay, so I've made my bad change, and I've run my analyzer afterwards. You can see it shows up as not reachable. Now I can quickly say, okay, not reachable; I need to troubleshoot what's going on, or, if it's near the end of the change window, I back out of the change. Going inside, you can see exactly what's happening. Fourth line down, it says you've got an attachment misconfigured on this particular VPC. That's the issue you need to go after. It may not always be that simple to tell exactly what the issue is, but you at least have that pass-fail for the change you made. Here is the pinpoint, and you're not guessing at the state of the network: you have a good picture of intent because you've run this before and after the change. You've got actionable data. If the change goes bad, you know exactly what to do next time, instead of backing out the change, reverting everything, and still being blind because you don't know exactly what happened. You've got the forensic data to figure it out.&lt;/p&gt;
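
&lt;p&gt;Reading that result back, continuing the sketch above (reusing its &lt;code&gt;ec2&lt;/code&gt; client and &lt;code&gt;analysis_id&lt;/code&gt;), looks roughly like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import time

# Continuing the sketch: poll the analysis started earlier, then report
# pass/fail and, on failure, the explanations that pinpoint the component.
while True:
    res = ec2.describe_network_insights_analyses(
        NetworkInsightsAnalysisIds=[analysis_id]
    )["NetworkInsightsAnalyses"][0]
    if res["Status"] != "running":
        break
    time.sleep(5)

if res.get("NetworkPathFound"):
    print("PASS: intent is reachable")
else:
    print("FAIL: not reachable")
    for exp in res.get("Explanations", []):
        # e.g. a misconfigured TGW attachment, missing route, blocked SG
        print("  ", exp.get("ExplanationCode"))
&lt;/code&gt;&lt;/pre&gt;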

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=1020" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0cau39l7v5peyjmwym0.jpg" alt="Thumbnail 1020" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=1030" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr8a378gk2iqvin3xjic2.jpg" alt="Thumbnail 1030" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=1040" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6me1kqhpvygssli3ot69.jpg" alt="Thumbnail 1040" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=1070" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufkjcmvubp02z423jp0v.jpg" alt="Thumbnail 1070" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, I want you to think about setting this up in a pipeline. Doing these checks manually is just fine, and you can use this for all sorts of things: attachments, transit gateways, VPC endpoints, even VPN connectivity all the way to your on-prem side. But set this up in a pipeline. The tagline I want you to walk away with is: never exit a change with a broken network. This is great for point-in-time stuff, but you can automate it too. The code that I have does this via Terraform; you can also do it via CloudFormation. I actually have a pre and a post step: before I make the change, run my reachability checks and give me a pass; execute the change via the same pipeline; then run those same checks again for a pass-fail. If I get any fails, I can require human intervention or optionally back out of the change entirely. You've created some chaos, but in a way that gives you the means to really narrow down what's going on.&lt;/p&gt;
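
&lt;p&gt;As a sketch of that gate in Python rather than the repo's Terraform (path IDs are placeholders, one per intent check; the change-apply step is whatever your pipeline already does):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import sys
import time
import boto3

# Sketch of the pre/post pipeline gate described above. Path IDs are
# placeholders, one Reachability Analyzer path per intent check.
ec2 = boto3.client("ec2", region_name="us-west-2")
PATHS = ["nip-0123456789abcdef0", "nip-0fedcba9876543210"]

def intent_passes(path_id):
    """Run one Reachability Analyzer check and return True if reachable."""
    aid = ec2.start_network_insights_analysis(
        NetworkInsightsPathId=path_id
    )["NetworkInsightsAnalysis"]["NetworkInsightsAnalysisId"]
    while True:
        res = ec2.describe_network_insights_analyses(
            NetworkInsightsAnalysisIds=[aid]
        )["NetworkInsightsAnalyses"][0]
        if res["Status"] != "running":
            return bool(res.get("NetworkPathFound"))
        time.sleep(5)

# Pre-change gate: never start from an already-broken baseline.
if not all(intent_passes(p) for p in PATHS):
    sys.exit("pre-change intent checks failed; aborting change")

# ... the pipeline applies the change here (Terraform, CloudFormation) ...

# Post-change gate: never exit the window with a broken network.
if not all(intent_passes(p) for p in PATHS):
    sys.exit("post-change intent checks failed; roll back or investigate")
&lt;/code&gt;&lt;/pre&gt;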

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=1090" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwvnf8yveq4jekrdogoi6.jpg" alt="Thumbnail 1090" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=1100" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ymmuxufxc991c5b0jte.jpg" alt="Thumbnail 1100" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=1110" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsonlixu9sslfop5vcpwf.jpg" alt="Thumbnail 1110" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=1120" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkf20i7hifihazrqamt3p.jpg" alt="Thumbnail 1120" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=1130" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fik7chxez0ws51roq6sji.jpg" alt="Thumbnail 1130" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=1140" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0upd1p9mh0ic5zwudkjt.jpg" alt="Thumbnail 1140" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, what a lot of people say is: my network is super complex. You've shown me just a few flows here; I have thousands of applications, and I don't really know what they look like. How do I create all of those intent checks? That's where the observability tools I mentioned before come in. The flow logs, the synthetic transactions, that's where you can look at all of the top talkers on your network. You gather this data from your applications, look at the top talkers, and say, okay, here is my critical application. The traffic volume alone gives me sources, destinations, and IP addresses, all things I can easily plug into Reachability Analyzer. Firewall rules, whether from a physical or virtual firewall or AWS Network Firewall, will tell you which rules are getting the most hits, so you can use that data too. You can update these intent checks over time in an automated way. This is a super easy thing to do. In my code, I actually have these flow logs running and SageMaker dashboards, so you can pull the data out and query it. It's a very good use case for AI to keep this in a continually updated pipeline.&lt;/p&gt;
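
&lt;p&gt;One hedged way to mine those top talkers, assuming your flow logs land in CloudWatch Logs with the default record format (the log group name below is a placeholder):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import time
import boto3

# Sketch: query flow logs for top talkers, which become candidate intent
# checks. Assumes flow logs in CloudWatch Logs with the default format;
# the log group name is a placeholder.
logs = boto3.client("logs", region_name="us-west-2")

q = logs.start_query(
    logGroupName="/vpc/flow-logs/prod",
    startTime=int(time.time()) - 7 * 24 * 3600,   # last 7 days
    endTime=int(time.time()),
    queryString=(
        "stats sum(bytes) as total by srcAddr, dstAddr, dstPort "
        "| sort total desc | limit 10"
    ),
)

while True:
    r = logs.get_query_results(queryId=q["queryId"])
    if r["status"] == "Complete":
        break
    time.sleep(2)

for row in r["results"]:
    print({f["field"]: f["value"] for f in row})  # candidate intents
&lt;/code&gt;&lt;/pre&gt;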

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=1150" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb9k3yg03tqvtsuyop9id.jpg" alt="Thumbnail 1150" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=1170" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F14e5ky1kxr7gd7ki0802.jpg" alt="Thumbnail 1170" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VdNvNFqdYHQ&amp;amp;t=1200" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9d97t543oh1c5dwwcgh7.jpg" alt="Thumbnail 1200" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So that's the code; that's the link I have there, and a QR code will pop up. But what I want you to think about is scheduling your first sort of chaos hour. This is something you should do like a DR drill, monthly or quarterly. Do something where you don't necessarily know what the change is, and have a different person break the network than the one fixing it. My code does this too; it has a randomized break in there so you can practice on your own. But you want to blend these kinds of tools together. The observability tools are great for pulling the data in. And this thought process, getting people to really think, hey, when I make my change I need to have these checks, sure, they take a little more time, but it really saves me, because I can prove to people the network is the problem or the network is not the problem. Use the repo and the runbook and go for it. If you'd like, submit some PR improvements. I'll be here in the expo theater, and there's the QR code linking to the repo. So feel free to come up and ask any questions, and I appreciate everyone.&lt;/p&gt;




&lt;p&gt;This article is entirely auto-generated using Amazon Bedrock.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS re:Invent 2025 - AI-powered co-sell: Unlock partner success with AWS Partner Central (PEX113)</title>
      <dc:creator>Kazuya</dc:creator>
      <pubDate>Thu, 11 Dec 2025 05:00:26 +0000</pubDate>
      <link>https://dev.to/kazuya_dev/aws-reinvent-2025-ai-powered-co-sell-unlock-partner-success-with-aws-partner-central-pex113-1ijc</link>
      <guid>https://dev.to/kazuya_dev/aws-reinvent-2025-ai-powered-co-sell-unlock-partner-success-with-aws-partner-central-pex113-1ijc</guid>
      <description>&lt;p&gt;&lt;strong&gt;🦄 Making great presentations more accessible.&lt;/strong&gt;&lt;br&gt;
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.&lt;/p&gt;

&lt;h1&gt;Overview&lt;/h1&gt;

&lt;p&gt;📖 &lt;strong&gt;AWS re:Invent 2025 - AI-powered co-sell: Unlock partner success with AWS Partner Central  (PEX113)&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In this video, AWS announces the launch of AWS Partner Central in the AWS Management Console, unifying co-selling and Marketplace features into one experience. The session highlights how 51% of AWS partners report higher revenue through partnership, and introduces key improvements including IAM-based user management, Partner Assistant with personalization, and unified solution creation. A comprehensive suite of APIs is unveiled, including Opportunity API, Leads API, Benefits API, and Solutions API, with MCP endpoints for building AI agents. Partners like IBM, Rackspace, and CrowdStrike demonstrate 50-72% growth using these integrations. The presentation details migration steps for existing partners and emphasizes how API-first approach eliminates operational burden, enabling automated workflows and real-time funding recommendations.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/Vj29vehr6wc"&gt;
&lt;/iframe&gt;
&lt;br&gt;
This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.&lt;/p&gt;

&lt;h1&gt;Main Part&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=Vj29vehr6wc&amp;amp;t=0" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8qm7igqiaa0zhwgq3aq1.jpg" alt="Thumbnail 0" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=Vj29vehr6wc&amp;amp;t=20" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn2gzu1hgca79vhf5ugcy.jpg" alt="Thumbnail 20" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Introducing AWS Partner Central in the AWS Management Console: A Unified Experience&lt;/h3&gt;

&lt;p&gt;Hey everyone, how's everyone doing? Okay, let's get started. Before I start, I want to talk about the value of partnering with AWS. Over 51% of partners who partner with AWS have reported higher revenue by working with AWS. Deal sizes have also increased, and partners have reported higher close rates by working with AWS. Why is this important, and how do they do this? They do this because we offer tooling, programs, and resources to help them grow along with their customers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=Vj29vehr6wc&amp;amp;t=60" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw9d66xlojlmvamlwagp8.jpg" alt="Thumbnail 60" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now I also want to talk about the marketplace benefits. Many of our partners are also selling on Marketplace,  and they are global partners. They're listing in different regions, and they offer the flexibility for our customers to transact on Marketplace. And this is also super important for partners.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=Vj29vehr6wc&amp;amp;t=80" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffgyce1pajc3f1tivzebi.jpg" alt="Thumbnail 80" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now today, when a partner wants to engage with AWS for co-selling and also wants to list on Marketplace as a seller, they have to go through two different journeys: one using Partner Central, and then separately the Marketplace Management Portal to list their solutions and transact. Partners have told us consistently that they want a simplified experience to engage with AWS, an experience that removes the operational workload, especially for the sales team, and also helps them scale faster with AWS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=Vj29vehr6wc&amp;amp;t=120" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx74nv3oy11peywpf2i5h.jpg" alt="Thumbnail 120" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With that, I'm excited to announce the launch of AWS Partner Central  in the AWS Management Console. With this launch, Partner Central is now discoverable on the AWS Management Console. You can go to the console now, search for this icon on the screen, and you can start registering if you're a new partner. If you're an existing partner, we have a completely different workflow on how to migrate from your current Partner Central to the Partner Central in the console, and I'll walk you through those in a couple of slides.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=Vj29vehr6wc&amp;amp;t=160" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7wrzpo1s64zvnzgyu583.jpg" alt="Thumbnail 160" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now why is this important? Like I said, we are unifying the experience, so now you can access all the Partner Central features you were previously using as well as all your Marketplace features in one experience. In addition to unification, we're also simplifying and personalizing your experience through a Partner Assistant. I'll talk more about that in the coming slides; Partner Assistant offers a lot of personalization, and it knows who you are. And finally, we're introducing AWS Identity and Access Management to manage all your users for accessing Partner Central. You're no longer restricted to the 20-user limit we previously had.&lt;/p&gt;

&lt;p&gt;Now you can have as many users as you want, give them fine-grained permissions, and access all that from your existing identity provider. You don't have to remember a password. You can just use SSO to single sign on into the Partner Central experience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=Vj29vehr6wc&amp;amp;t=220" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2mjo8gyuh63s5b0k2ew1.jpg" alt="Thumbnail 220" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=Vj29vehr6wc&amp;amp;t=230" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frtsrid0b3kbummh89pwk.jpg" alt="Thumbnail 230" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now  I talked about how we're unifying the experience. Now I want to quickly show you an example of how we're unifying at a feature level. So here's an example where a solution  gets created in the Marketplace catalog from the console experience. Previously you had to create solutions separately for co-selling and then create a separate solution for Marketplace transaction. Now you can do that in this unified experience where you can go to the console, list a solution, and then use the same solution for co-selling as well as listing on Marketplace. We also introduced a multi-product solution where you can have multiple listings under the same solution and you can go to market as a combined solution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=Vj29vehr6wc&amp;amp;t=270" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmi0b3p3guqo754f3o8f1.jpg" alt="Thumbnail 270" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Partner Assistant Personalization and Early Adopter Success Stories&lt;/h3&gt;

&lt;p&gt;Now let's talk about the Partner Assistant. Last year we launched Partner Assistant at re:Invent, and it offered generic responses. Now we're introducing personalization: Partner Assistant knows who you are and what you're looking for, and it gives you recommendations, summaries, and guidance on the specific workflow you're currently in, then walks you through the experience. I showed you a solution experience before. Now let's talk about how our assistant can work with that solution.&lt;/p&gt;

&lt;p&gt;The assistant can recommend specific specializations that you can sign up for, complete, and get those solutions listed on the marketplace. It can also give you summaries such as your existing opportunities. You can ask the assistant something like "summarize my leads and co-sell opportunities," and it will give you all your updates. Similarly, you can ask questions about funding, such as "give me the program requirements for a specific funding program," and the assistant will help you with all that.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=Vj29vehr6wc&amp;amp;t=330" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F75pj0whxwep9tmfik484.jpg" alt="Thumbnail 330" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I talked about the Partner Central launch and the multiple features we launched. I want to take a minute to recognize all the launch partners who worked with us since the beginning from the design phase and helped us by providing feedback at the right time. These partners have already migrated to Partner Central in the console, and they're using the features and already benefiting from them. Especially regarding the sales operations I talked about, by being on the console they're already seeing the benefit of how the sales team can now go to one experience and manage all their marketplace, co-selling, and everything all in one place.&lt;/p&gt;

&lt;h3&gt;
  
  
  API-First Approach: Eliminating Operational Burden and Enabling Intelligent Automation
&lt;/h3&gt;

&lt;p&gt;With that, I'm going to hand it over to Pradith. He's going to provide more updates about what we just launched. Thanks, Raj. Quick question before I start: how many of you are AWS partners already? Great. Has anyone logged into the new Partner Central experience? Nice. Okay, thank you, and thanks Raj for sharing the new unified console experience, which is all under one roof with one single login. But that's only part of the story. There's a bigger part behind this which is powering it: the APIs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=Vj29vehr6wc&amp;amp;t=410" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu65bsbq0hinl8xwbluwk.jpg" alt="Thumbnail 410" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=Vj29vehr6wc&amp;amp;t=440" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faut9ibi101ghw920j8se.jpg" alt="Thumbnail 440" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Existing partners like you, and some of our new partners, have told us that logging into the portal and doing the data entry is still an additional task for your sales team. It takes away meaningful time when they should actually be working with customers and building the alliances, and this is the operational burden that we need to eliminate. So a couple of years back we started this journey of an API-first approach. Last year we launched the Opportunity API, which manages the end-to-end integration of an opportunity: your sales team can manage opportunities in whatever system you are using, and it will automatically sync with our system and drive the co-sell engagement.&lt;/p&gt;
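
&lt;p&gt;As a rough illustration of what that CRM-side integration can look like, here is a minimal sketch using the AWS SDK for Python (boto3) and its &lt;code&gt;partnercentral-selling&lt;/code&gt; client to page through co-sell opportunities. The response field names shown are assumptions based on the public API shape, so treat this as a starting point rather than the canonical integration.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import boto3

# Sketch: mirror co-sell opportunities into a local CRM sync job.
client = boto3.client("partnercentral-selling", region_name="us-east-1")

next_token = None
while True:
    kwargs = {"Catalog": "AWS"}  # use "Sandbox" while testing
    if next_token:
        kwargs["NextToken"] = next_token
    page = client.list_opportunities(**kwargs)
    for summary in page.get("OpportunitySummaries", []):
        # Field names assumed from the public API shape.
        print(summary.get("Id"), summary.get("LifeCycle", {}).get("Stage"))
    next_token = page.get("NextToken")
    if not next_token:
        break
&lt;/code&gt;&lt;/pre&gt;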

&lt;p&gt;A lot of partners are already benefiting. Partners like IBM and Quantics have already seen volume increase by more than 50%. There's also increased velocity, because now there is no data entry: systems can talk to each other, and we can contextualize with the right data to make that progression a lot faster. It has also increased visibility, because now you can share more and the sales team can actually go and work on building the alliances. So not only is the data there to support them, but they have face time with the right partners and customers to go work on customer deals. That has directly impacted their visibility, and a lot of partners have shared how they are able to get more AWS referrals after getting this integration done.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=Vj29vehr6wc&amp;amp;t=510" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qvo7hg2sybirwni7zo9.jpg" alt="Thumbnail 510" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the success of that Opportunity API, we are excited to announce a new suite of APIs with the Partner Central console, covering the entire spectrum of partner engagement. Whether it's about building your profile or defining your solutions, as I just mentioned, you can directly sync with your own catalog system so it always remains fresh, and you can build the discovery as you want it, whether through Marketplace or directly to AWS sellers. That single solution management can drive your discovery. We also have a Leads API to cover the pre-sales part and ensure that you get the right leads and qualify them to an opportunity much faster. And then the most important of all is the Benefits API. The Benefits API covers all the funding programs that AWS has to offer, consolidated under one single API, so you can find the right eligibility based on your profile as well as on the deal construct, and see which particular funding program is most valuable for that particular deal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=Vj29vehr6wc&amp;amp;t=580" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2d20pagk376656c5l6pw.jpg" alt="Thumbnail 580" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I do want to go a little deeper into the Funding API and the Benefits API, which can help you transform this whole funding experience. This is the most used workflow after the sales and Opportunity APIs, which is why we launched it a lot sooner. Partners have told us that they have a challenge understanding the right funding program, and that the programs have different permission sets.&lt;/p&gt;

&lt;p&gt;Permission sets go through different approval workflows by geography, by customer segment, or by the strategic nature of the deal. You can configure all of this within your own workflow, without worrying about losing a notification in an email. You can configure it and personalize it at your own pace in your own system.&lt;/p&gt;

&lt;p&gt;Not only that, once you automate it, you can pull additional data from your own systems and the collaboration tools you use to contextualize, personalize, and create a whole agentic workflow behind it. A lot of our partners have already done this. For example, Rackspace has used this API together with the Selling API and their own data to contextualize the whole funding workflow for their sales team. They are already seeing 72% year-over-year growth after this innovation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=Vj29vehr6wc&amp;amp;t=670" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fot3vxynlksbrdq2auk1e.jpg" alt="Thumbnail 670" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Speaking of agents and what these APIs bring you: these APIs are also available through MCP endpoints. This lays the foundation for you to start rethinking the co-sell experience. Until recently, AWS co-selling has been primarily focused on person-to-person and very opportunity-specific interactions. We have been doing most of the support on the sales part of it, but pre-sales and post-sales have largely remained unattended from the system perspective.&lt;/p&gt;
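
&lt;p&gt;To make the MCP point concrete, here is a small sketch of how an agent host might discover the tools behind such an endpoint, assuming the reference Python MCP SDK (the &lt;code&gt;mcp&lt;/code&gt; package) and a purely hypothetical endpoint URL; the actual Partner Central MCP endpoint, its authentication, and its tool names were not specified in the talk.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import asyncio

# Assumes the reference MCP Python SDK ("mcp" package).
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Hypothetical endpoint URL, for illustration only.
ENDPOINT = "https://example.com/partner-central/mcp"

async def main():
    async with streamablehttp_client(ENDPOINT) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover what the endpoint exposes to agents.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(main())
&lt;/code&gt;&lt;/pre&gt;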

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=Vj29vehr6wc&amp;amp;t=720" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0zqoq9xma00gq7ji3bd2.jpg" alt="Thumbnail 720" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With these APIs, you can now cover the end-to-end spectrum and redefine that whole experience. You can build intelligent, automated workflows that give real-time funding recommendations. They can also surface the customer insights required to get past the initial hurdle of a particular deal and approach the customer with far more readiness, rather than just figuring out how and what you should talk about on that deal.&lt;/p&gt;

&lt;p&gt;These are the foundations that we are building and handing over to you for you to derive your own innovation. Partners are already using it, and as you build your own innovation and custom workflows, you can get a lot more clarity and insights from these APIs by contextualizing the data. You can also automate the workflows that will give the time back to your sales team, as some of the partners have already mentioned, just by using one API. Now you have a full suite of APIs to do this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=Vj29vehr6wc&amp;amp;t=780" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbxmwstee6c6n4uk9fte1.jpg" alt="Thumbnail 780" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I want you all to take away this one message: we will continue to build this foundation and other capabilities. It's up to you how to pace it out, and we are here to help you drive this personalization and contextual workflow as you need. We'll talk a little more about it. And it's not only you: we are also using the same APIs to automate our own field team.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=Vj29vehr6wc&amp;amp;t=820" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe8ha1lpoapb3wch3a5qp.jpg" alt="Thumbnail 820" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are driving the same partner matching engine, which finds the right solutions to recommend to a particular customer and the right partner to collaborate on a deal, and then gives additional insights to our field team through a conversational agent, so they understand what that partner can bring and can have more intelligent discussions before they come to the table. We are also using these APIs and all the AI work for scoring and prioritization, so that our field team and your field team know what to prioritize on a given day. And then we automate the whole workflow, as I mentioned earlier.&lt;/p&gt;

&lt;p&gt;The art of the possible here is unlimited. As I mentioned, Rackspace, CrowdStrike, and IBM have already done some of this innovation using these APIs, and a lot of other launch partners are building new workflows on this. Imagine a future state where a salesperson joins your team on day one and really understands which AWS deals they have to focus on. They get real-time funding recommendations as well as the customer insights to create a deal construct and talk to their customers about how they can be more profitable for them.&lt;/p&gt;

&lt;p&gt;Not only can they talk about it, but with a single click of a button, they can execute on that workflow. You can just provide conversational consent about the funding to be requested or approved, and based on that, it will automatically work out and provide the whole deal construct and make your deal go faster. I would like to give you all the chance to build on this foundational capability and give us feedback on what capabilities you will need in order to innovate. With that, I will hand it over to Raj to take us home.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=Vj29vehr6wc&amp;amp;t=900" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmidu1vgkqw3wqdcg2bc2.jpg" alt="Thumbnail 900" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Migration Steps, Support Resources, and Next Actions for Partners
&lt;/h3&gt;

&lt;p&gt;Thanks, Pradith. I want to do a quick recap, and then we'll talk about next steps after that. So we talked about how we are unifying your experience on the AWS Management Console by launching Partner Central on the console, where you can access both co-selling and your Marketplace features.&lt;/p&gt;

&lt;p&gt;We also talked about how the APIs can be used and integrated with your own CRM or any tool of choice, and we have SDKs available in the AWS documentation. Go read about them, and let your developers and builders look at those APIs and SDKs to adopt the entire suite we launched. Finally, we talked about the MCP endpoint that you can use to build your agents, as well as the existing assistants and agents that we are launching.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=Vj29vehr6wc&amp;amp;t=950" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhb4watd50ttdqzoqevgf.jpg" alt="Thumbnail 950" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I previously talked about how new partners can just log in and register. Now, what about existing partners? Existing partners can log into the current Partner Central and walk through the steps we introduced; they're available on the homepage, accessible to your alliance lead as well as your cloud administrator. The steps are very simple. The first step is to link the AWS account that you want to use for Partner Central, and many of you have already done this step either this year or last year.&lt;/p&gt;

&lt;p&gt;The next step is the important one, where you identify which users you want to have access to the console experience. Some of these users may already be using the Marketplace features on the console, so now it's just a matter of adding the managed policies to those users. We are giving you an export of all the current users in Partner Central along with their roles, so you can decide what the managed policy for those roles should be. Again, we have documentation for all of this, including a mapping between the existing roles and managed policies in the AWS documentation.&lt;/p&gt;
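
&lt;p&gt;Mechanically, granting those users access is standard IAM work. A minimal boto3 sketch follows; the policy ARN and user names are placeholders, and the real managed-policy names and their mapping to Partner Central roles are in the AWS documentation mentioned above.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import boto3

iam = boto3.client("iam")

# Placeholder ARN: look up the actual Partner Central managed policy
# names and their role mapping in the AWS documentation.
POLICY_ARN = "arn:aws:iam::aws:policy/ExamplePartnerCentralAccess"

# Users exported from Partner Central that should keep access.
for user_name in ["alliance-lead", "marketplace-ops"]:
    iam.attach_user_policy(UserName=user_name, PolicyArn=POLICY_ARN)
&lt;/code&gt;&lt;/pre&gt;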

&lt;p&gt;Finally, the last step is to schedule your migration. While you're here, you have a great opportunity to walk to the booth, complete all these prerequisites, and schedule the migration. Migrations start next week, and we try to do them off hours so they don't impact your business: we do it over a weekend or off hours so that when you come back, you'll have the new experience. We want to make sure you complete all the prerequisites, because once you do, we will initiate the migration and you will no longer have access to the current experience; all the features you use today will be accessed from the new experience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=Vj29vehr6wc&amp;amp;t=1060" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmquf8d8hc7dc62y0pioj.jpg" alt="Thumbnail 1060" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, we talked about the APIs and we talked about how to migrate. But what if you need help with the migration and with adopting these APIs? One option is to build it yourself: work with your engineering team and use the SDKs. The other option is to hire a third-party integrator who will work with you through the migration process as well as help you with the API integrations, and we have incentives for both. We can work with you on how to accelerate both the migration and the API adoption.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=Vj29vehr6wc&amp;amp;t=1100" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xo7evsl6zmohqgkm660.jpg" alt="Thumbnail 1100" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The migration is super important because all these APIs we have launched are only available after you migrate to the console experience. Here are some of the partners and third-party integrators who will work with you and help you through the whole migration. Some of the partners here also offer API support as third-party integrators.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=Vj29vehr6wc&amp;amp;t=1120" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F38rfd3j0rpvtr6jo5hr6.jpg" alt="Thumbnail 1120" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, I want to leave you with this note. If you have more questions or want to read more, please scan the QR code for the partner blog. We're also available at the booth, so please stop by; we'll help you with the migration right now. Thank you, everyone.&lt;/p&gt;




&lt;p&gt;; This article is entirely auto-generated using Amazon Bedrock.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS re:Invent 2025 - TDK SensEI built scalable IoT platform with AWS for sensor insights (NTA204)</title>
      <dc:creator>Kazuya</dc:creator>
      <pubDate>Thu, 11 Dec 2025 04:50:22 +0000</pubDate>
      <link>https://dev.to/kazuya_dev/aws-reinvent-2025-tdk-sensei-built-scalable-iot-platform-with-aws-for-sensor-insights-nta204-2j04</link>
      <guid>https://dev.to/kazuya_dev/aws-reinvent-2025-tdk-sensei-built-scalable-iot-platform-with-aws-for-sensor-insights-nta204-2j04</guid>
      <description>&lt;p&gt;&lt;strong&gt;🦄 Making great presentations more accessible.&lt;/strong&gt;&lt;br&gt;
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.&lt;/p&gt;

&lt;h1&gt;
  
  
  Overview
&lt;/h1&gt;

&lt;p&gt;📖 &lt;strong&gt;AWS re:Invent 2025 - TDK SensEI built scalable IoT platform with AWS for sensor insights (NTA204)&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In this video, Oren Waldman and Bob Roth from TDK SensEI demonstrate how their edge intelligence platform transforms industrial operations on AWS. The solution evolved from condition-based monitoring to agentic AI that predicts machine failures and automatically orchestrates responses—coordinating maintenance, verifying parts, and assigning technicians. Built on AWS IoT Core, Greengrass, SageMaker, and Bedrock, the architecture features security-first design with tenant isolation and outbound-only connectivity. The platform creates digital twins combining sensor data, maintenance logs, and production schedules, enabling AI agents to autonomously solve problems before failures occur. Now available on AWS Marketplace, it serves manufacturing, smart buildings, logistics, and energy sectors.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/OJkgqbmo0r8"&gt;
&lt;/iframe&gt;
&lt;br&gt;
; This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.&lt;/p&gt;

&lt;h1&gt;
  
  
  Main Part
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=OJkgqbmo0r8&amp;amp;t=0" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3peotot4pf5yrlfh421v.jpg" alt="Thumbnail 0" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=OJkgqbmo0r8&amp;amp;t=10" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjagqd9ecfz1cr00ieuq3.jpg" alt="Thumbnail 10" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Introduction: TDK SensEI's Edge Intelligence Platform Transforming Industrial Operations
&lt;/h3&gt;

&lt;p&gt;Good afternoon. I'm really excited to share with you today how TDK SensEI is transforming industrial operations using AWS.  For those of you who don't know, TDK SensEI is building something truly innovative, a next generation edge intelligence platform that fuses advanced sensors, AI, and machine learning to deliver real-time actionable insights for manufacturing and industrial environments.&lt;/p&gt;

&lt;p&gt;Now what makes this particularly compelling is the problem that they're trying to solve. In manufacturing, when something goes wrong, technicians typically spend 80% of their time just searching for the problem. TDK SensEI is changing that equation entirely. Their platform has evolved from basic condition-based monitoring to predictive maintenance and now to agentic AI, meaning it doesn't just predict when a machine will fail, it actually recommends solutions and orchestrates a response.&lt;/p&gt;

&lt;p&gt;Think about that, automatically coordinating maintenance schedules, verifying parts availability, and assigning qualified technicians all before a failure even occurs. And they've built this entire solution on AWS. They're leveraging IoT Core and Greengrass for edge computing, SageMaker for machine learning, and Bedrock for their agentic AI capabilities. The architecture is security first with comprehensive device provisioning and tenant isolation built in from day one.&lt;/p&gt;

&lt;p&gt;And the business impact is real. They're reducing unplanned downtime, optimizing maintenance costs, and creating digital twins of operational data that enable better decision making across manufacturing, smart buildings, logistics, and energy sectors. Now what's particularly exciting for all of us is that TDK SensEI is now available on the AWS Marketplace, making this solution accessible to your customers who are facing these exact challenges.&lt;/p&gt;

&lt;p&gt;So I'm Oren Waldman. I'm a senior solutions architect here. I've had the great privilege of closely working with TDK SensEI throughout this year, and it's been incredible to watch them go from development to active commercialization. And now it's my pleasure to introduce Bob Roth, CTO of TDK SensEI, who will walk you through their journey and show you how they built the platform. Bob, take it away. Thank you, Oren.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=OJkgqbmo0r8&amp;amp;t=130" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs9mll9m550q00nyb2oc0.jpg" alt="Thumbnail 130" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=OJkgqbmo0r8&amp;amp;t=140" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpvvps3yqpbk8q0fivq38.jpg" alt="Thumbnail 140" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Evolution from Condition-Based Monitoring to Agentic AI in Manufacturing
&lt;/h3&gt;

&lt;p&gt;All right. So I'll be talking to basically five key areas as we go through the presentation. First, a bit more of an overview and depth; Oren did a nice job of introducing the company, and I have a little more there. Then the evolution we see going on in industrial automation, from condition-based monitoring to predictive and now agentic. Then a bit about the platform we've built and how the architecture works together, and finally where we're going and what we see as the evolution of this space.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=OJkgqbmo0r8&amp;amp;t=170" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhzf6xyiwad734u4o6q4q.jpg" alt="Thumbnail 170" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So first and foremost, who is TDK SensEI? Most people, when they hear TDK, think of cassette tapes or audio tapes, so it's a bit of a change. We were formed just a little over a year ago with the mission of bringing real-time intelligence and predictive, AI-driven technologies to factory management and manufacturing solutions. As a division of TDK, we have the privilege of access to TDK's industrial sensors and other components that we can bring into our solution, as well as a broad spectrum of TDK's internal factories that we can leverage for building our solutions.&lt;/p&gt;

&lt;p&gt;Overall, we have built a strong SaaS-based platform that has both cloud and on-premise capabilities, and we've got a team that has a global footprint across North America, Asia, Japan, and other parts of the world. Currently, we have four key market areas that we focus on: manufacturing, smart buildings, logistics and distribution centers, and the energy sector, and I'll talk a little bit more about how we apply to each of those soon.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=OJkgqbmo0r8&amp;amp;t=230" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi475cazu6un3tnrkvfvc.jpg" alt="Thumbnail 230" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All right, so what's going on in the area of machine maintenance and facility management? This is an ongoing, evolving space, and it continues to evolve today. Initially, the focus was on condition-based monitoring: the detection of problems as they occur, leveraging machine learning to evaluate what's going on in real time. The primary value proposition is relieving the burden of monitoring and decreasing the time to reaction when a problem does occur.&lt;/p&gt;

&lt;p&gt;However, that space is evolving more towards predictive maintenance, trying to detect problems before they occur so you can reduce and remove downtime, not just minimize it, but prevent it from happening in the first place. And then where we're focused today, which is where I like to think the magic is, is on the prescriptive or agentic, not just predicting the problem, but also analyzing and providing the solution to that problem and executing it for you, basically allowing you to have a strong solution that not just figures out what's wrong, but how to fix it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=OJkgqbmo0r8&amp;amp;t=300" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvli69stpihb03qxt5bra.jpg" alt="Thumbnail 300" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All right, a little more depth. This is a deeper dive into the overall ecosystem that we've put together.&lt;/p&gt;

&lt;p&gt;Successfully automating and driving efficiency across factory management requires gathering significant amounts of data. Data is a key factor. We collect sensor data from machines, operations information like maintenance logs, and other information about the facilities, such as production schedules. We combine all of this into a digital twin that can then be operated on by AI agents and LLMs, an area they're optimally suited for. All of that is then realized for the customer in an overall factory solution, through mobile or the EdgeRX platform, with notifications that tell you what the system is doing, when, why, and how.&lt;/p&gt;
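
&lt;p&gt;As a purely conceptual sketch (none of these names come from the talk), the combined record behind such a digital twin might be shaped something like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from dataclasses import dataclass, field

# Illustrative shape for one machine's digital-twin record, combining
# sensor data, operations information, and facility context.
@dataclass
class MachineTwin:
    machine_id: str
    vibration_readings: list = field(default_factory=list)   # sensor data
    maintenance_log: list = field(default_factory=list)      # e.g. repair notes
    production_schedule: list = field(default_factory=list)  # facility context

twin = MachineTwin(machine_id="pump-7")
twin.vibration_readings.append(0.42)
twin.maintenance_log.append("2025-11-02 replaced seal")
&lt;/code&gt;&lt;/pre&gt;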

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=OJkgqbmo0r8&amp;amp;t=360" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzuw05tg9oim5spf4g03f.jpg" alt="Thumbnail 360" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we look through what our solution encompasses, this is really what we have deployed today. We have a gateway device connected to many sensors, which are located within the factory and read information. All of that is connected to the SaaS and the cloud, where AWS services provide data storage and ML training models. I will go into quite a bit more detail on the architecture behind that shortly. This is all hosted within various regions of AWS, so we have a global platform across different parts of the world, and we asynchronously communicate what's going on to the customer through the dashboard and other notification mechanisms.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=OJkgqbmo0r8&amp;amp;t=410" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3gdmfpofgcrb3ygnsols.jpg" alt="Thumbnail 410" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS-Based Architecture: Security-First Design with IoT Core, Greengrass, and Multi-Tenancy
&lt;/h3&gt;

&lt;p&gt;Now I'm going to take a pretty deep dive into what's behind this, what the architecture is, and what we learned as we were building this platform. This diagram represents a high-level overview of the overall set of services that we've built for this solution. It contains the components of the solution that are on the edge in the factory itself, as well as the components that live in the cloud: key things that we've built ourselves, and areas where we've leveraged AWS components and services.&lt;/p&gt;

&lt;p&gt;Things like IoT Core and Greengrass provide a strong foundation for the IoT software integration, communications to the facilities, and provisioning solutions. We have significant amounts of time-series data, so we use Amazon Timestream for that. We leverage Iceberg for long-term data storage, as well as containers, Kubernetes, and other solutions for the things that we've built. For the AI and ML components, we heavily leveraged Amazon SageMaker pipelines and Step Functions, along with some other components which aren't depicted here, like Cognito for our identity management. Overall, the AWS foundation provides a very strong platform that we can build a custom solution on top of, one that blends the two worlds of the facility and the cloud.&lt;/p&gt;
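
&lt;p&gt;For a feel of the time-series piece, landing a sensor reading in Timestream is a single write call. A minimal sketch with boto3; the database, table, and dimension names are made up for illustration:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import time
import boto3

ts = boto3.client("timestream-write")

# One vibration sample, tagged with the facility and sensor it came from.
record = {
    "Dimensions": [
        {"Name": "facility", "Value": "plant-01"},
        {"Name": "sensor", "Value": "pump-7-vibration"},
    ],
    "MeasureName": "vibration_rms",
    "MeasureValue": "0.42",
    "MeasureValueType": "DOUBLE",
    "Time": str(int(time.time() * 1000)),
    "TimeUnit": "MILLISECONDS",
}

ts.write_records(DatabaseName="sensor_data", TableName="vibration",
                 Records=[record])
&lt;/code&gt;&lt;/pre&gt;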

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=OJkgqbmo0r8&amp;amp;t=490" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuc1sikfyzuda7wuqxsn0.jpg" alt="Thumbnail 490" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here I'll dive down a little into the boxed area, which highlights the interface between the facility itself and what's in the cloud. I'm zooming in on IoT Core, Greengrass, and MQTT. We've built a solution where all communication from the facility out to the cloud is done over a single protocol, MQTT, encrypted with TLS. This is a very important part of the architecture, because access to these manufacturing facilities is a massive security concern for them. They really don't want to expose something that's making thousands of computer chips or wafers to anything outside their firewall. This single protocol is a very important part of that, and IoT Core and Greengrass provide the provisioning and the ability to communicate and manage this data efficiently and effectively.&lt;/p&gt;
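
&lt;p&gt;From a gateway's point of view, that outbound-only MQTT-over-TLS pattern looks roughly like the following sketch, using the common paho-mqtt client with placeholder endpoint, certificate paths, and topic (this is illustrative, not TDK SensEI's actual gateway code):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import ssl
import paho.mqtt.client as mqtt

# Placeholders: your AWS IoT endpoint and device credentials.
# (Constructor shown in paho-mqtt 1.x style; 2.x also expects a
# CallbackAPIVersion argument.)
ENDPOINT = "example-ats.iot.us-east-1.amazonaws.com"

client = mqtt.Client(client_id="gateway-001")
client.tls_set(
    ca_certs="AmazonRootCA1.pem",
    certfile="device.pem.crt",
    keyfile="private.pem.key",
    tls_version=ssl.PROTOCOL_TLSv1_2,
)
client.connect(ENDPOINT, port=8883)  # outbound only; no inbound firewall holes
client.loop_start()

# Publish one reading over the encrypted session, then shut down cleanly.
info = client.publish("factory/plant-01/pump-7/vibration",
                      '{"rms": 0.42}', qos=1)
info.wait_for_publish()
client.loop_stop()
client.disconnect()
&lt;/code&gt;&lt;/pre&gt;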

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=OJkgqbmo0r8&amp;amp;t=550" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbwjbzdp5v5qxzx7u3wtj.jpg" alt="Thumbnail 550" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As I was saying, security is a huge part of the solution that we've built. Factories are very concerned about this and about the IT situation, and this is another place where we're able to build on top of what AWS provides. AWS has a very strong set of services, things like logging and monitoring with CloudWatch and EventBridge. These tools automatically allow us to capture what's occurring in the facility and to alert customers to it. AWS has embedded AI and ML into these tools as well, with things like threat intelligence through GuardDuty. We're able to understand whether there are security concerns for our customers by leveraging these services. The other thing is that many of these factories have ISO 9001 and other certifications. Compliance is a huge thing: making sure that the processes and data they're using are being used in the correct way.&lt;/p&gt;

&lt;p&gt;The automated compliance capabilities of AWS Config allow us to make sure that we stay compliant with their ISO 9001 rules for everything we deploy inside their factory.&lt;/p&gt;
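
&lt;p&gt;As one small example of how that monitoring plumbing can be wired, an EventBridge rule can route every GuardDuty finding to an alerting target. A sketch with boto3; the rule name is arbitrary and target configuration is omitted:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import json
import boto3

events = boto3.client("events")

# Match every GuardDuty finding so it can be routed to an alert target.
events.put_rule(
    Name="guardduty-findings-to-alerts",
    EventPattern=json.dumps({
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
    }),
    State="ENABLED",
)
&lt;/code&gt;&lt;/pre&gt;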

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=OJkgqbmo0r8&amp;amp;t=620" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2akjzpfo3l3cwyill587.jpg" alt="Thumbnail 620" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The other thing we've built out here: we have tens of facilities today and are going to have tens of thousands, with multiple facilities per customer. So the architecture is a strongly federated and segmented approach to multi-tenancy. The only services exposed publicly are the ones that allow the devices to connect; each customer lives in their own private subnet with their own private services. Each facility is isolated from the others, which means ML data does not cross from one client's training environment to another.&lt;/p&gt;

&lt;p&gt;So data isolation, the ability to keep all this training information separate, is another critical aspect of building a platform in the cloud with AWS at scale. The other interesting thing is that all connectivity from the facility is outbound to the cloud. There's never any inbound connectivity from the cloud to the facility. This reduces IT configuration: they don't have to open holes in firewalls. It's all outbound. All of the firmware updates happen through that phone-home mechanism we've built into the architecture.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=OJkgqbmo0r8&amp;amp;t=690" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1pbrzbkm3p4h2mid3sm6.jpg" alt="Thumbnail 690" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is a quick example of one of the things I was speaking to: device provisioning. We ship out sensors and gateways, and all of those devices are equipped with a certificate that simply allows them to connect to what we call the lobby. They connect out to the lobby, learn which customer account they're registered to, and then receive new credentials that allow them to connect directly to that customer's federated tenancy area of the solution.&lt;/p&gt;

&lt;p&gt;Through this approach, we can preconfigure and ship all these devices from our manufacturing, but they're secure because they can only speak to the lobby area until they've received the appropriate credentials through this automated process. This means we don't have to preconfigure customer-specific details in the manufacturing process; we can ship the devices out and rapidly get them deployed.&lt;/p&gt;
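
&lt;p&gt;This lobby flow maps closely to AWS IoT fleet provisioning by claim certificate, where the device requests permanent credentials over reserved MQTT topics. A compressed sketch of that exchange; the template name and parameters are assumptions, and subscription setup is shown only in comments:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import json

# Reserved AWS IoT fleet-provisioning topics; the template name and the
# SerialNumber parameter are assumptions for illustration.
TEMPLATE = "lobby-provisioning-template"
CREATE_TOPIC = "$aws/certificates/create/json"
PROVISION_TOPIC = f"$aws/provisioning-templates/{TEMPLATE}/provision/json"

def save_credentials(payload):
    # Stub: persist payload["certificatePem"] and payload["privateKey"]
    # as the device's permanent identity.
    pass

def on_message(client, userdata, msg):
    payload = json.loads(msg.payload)
    if msg.topic == CREATE_TOPIC + "/accepted":
        save_credentials(payload)
        # Register the device against the provisioning template.
        client.publish(PROVISION_TOPIC, json.dumps({
            "certificateOwnershipToken": payload["certificateOwnershipToken"],
            "parameters": {"SerialNumber": "GW-001"},
        }))

# After connecting with the shared claim certificate, the device would
# subscribe to CREATE_TOPIC + "/accepted" and PROVISION_TOPIC + "/accepted",
# then publish "{}" to CREATE_TOPIC to request permanent credentials.
&lt;/code&gt;&lt;/pre&gt;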

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=OJkgqbmo0r8&amp;amp;t=750" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnrjke36wkh9l27e3u2xc.jpg" alt="Thumbnail 750" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  AI and ML Capabilities: Automated Predictive Maintenance with Agent-Driven Solutions
&lt;/h3&gt;

&lt;p&gt;All right, I'm going to shift gears a little here, and now I want to talk about what is probably my favorite part of this: the AI and ML components, the new stuff that we're focused on. In the same architecture space, we now have what I consider to be the magic, the hallmark of the TDK SensEI solution, which is the power of AI and ML applied to automate, predict, and create a solution here.&lt;/p&gt;

&lt;p&gt;The core foundation highlighted here is that we leverage Amazon SageMaker and Step Functions to automatically build and train ML models for our sensors. Our customers and facility leaders don't have to deploy an ML model or do any kind of training. We deploy the sensors, they learn what normal is for operations in the facility, and that model gets built into our device and runs directly in the sensor. From that point on, it can detect anomalous behavior.&lt;/p&gt;
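
&lt;p&gt;In practice, kicking off that automated training for a newly deployed sensor could be as simple as starting a workflow execution. A boto3 sketch; the state machine ARN and input fields are placeholders, not the actual TDK SensEI workflow:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import json
import boto3

sfn = boto3.client("stepfunctions")

# Placeholder ARN; the real training workflow would wrap the SageMaker
# pipeline that learns "normal" for a newly deployed sensor.
response = sfn.start_execution(
    stateMachineArn=("arn:aws:states:us-east-1:123456789012:"
                     "stateMachine:train-sensor-baseline"),
    input=json.dumps({"facility": "plant-01", "sensor": "pump-7-vibration"}),
)
print(response["executionArn"])
&lt;/code&gt;&lt;/pre&gt;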

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=OJkgqbmo0r8&amp;amp;t=820" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0y5rfs9dzx167fp77xa.jpg" alt="Thumbnail 820" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So this is a very automated solution. There's no manual configuration required to get these sensors deployed, and it's all built on top of the rich set of AWS primitives we've described. All right, now where is this all going? The next phase is not just telling you you're going to have a problem, but solving the problem for you before it occurs. That's really where agents come to bear: agents taking action on your behalf, with whatever permissions you've given them, based on the rich set of data we have from our sensors.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=OJkgqbmo0r8&amp;amp;t=840" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn7ypju34iepyhd3rz3rm.jpg" alt="Thumbnail 840" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let me walk through what this would look like if you're a factory manager and you have an upcoming failure. This is the experience of the EdgeRX platform that we've built. As a factory manager, you would get a notification saying: hey, you've got a pump with bearing wear, and we've detected it's going to fail in the next two weeks. We already know what parts are required, because our agent has read the manual for this pump and knows how to do the repair.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=OJkgqbmo0r8&amp;amp;t=890" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9wjc7tkqp4kfdn1pap97.jpg" alt="Thumbnail 890" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It knows what's in your ERP system. It knows what parts you have available, what parts are necessary, and what skills from the manufacturing installation process are required. You get all of that in a summary. As the factory manager, you can dive down and understand it in more depth: okay, why? What's going to happen? Here's an example where we're looking at the information coming from our sensors and predicting this failure because we're going to cross the threshold of acceptable vibration on this particular pump.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=OJkgqbmo0r8&amp;amp;t=910" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0zfdxuiwkti6juneciue.jpg" alt="Thumbnail 910" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=OJkgqbmo0r8&amp;amp;t=940" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbvzw6stob8ksdb9nzmh5.jpg" alt="Thumbnail 940" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We know what parts are required to make the repair, and the system has looked and recognized that one of the parts is not in inventory. If you trust it and have it configured that way, we have an agent that can make the purchase for you and then track the availability of the part being shipped. Alternatively, it can inform your parts department that they need to make the purchase, if you prefer to keep the financial aspect inside your organization. It will track when the part is available and schedule the repair when it's ready to be done. It knows exactly what skill set is required and can connect to the time management system in the factory to determine what resources are available for scheduling. Effectively, the factory manager now has a solution to this problem without having had to do any actual work to address it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=OJkgqbmo0r8&amp;amp;t=960" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ql8y661pwcl6l8429c6.jpg" alt="Thumbnail 960" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What's happening behind the scenes is a framework of agents that work together. We have an orchestration agent responsible for understanding the workflow: for example, do you allow your agents to make purchases, or do we need to call out to a human and send them a notification that they need to make a purchase? All of the information and data available to the sensors and tools is connected together. We use RAG (retrieval-augmented generation) to do this, and we effectively have a rich digital twin, not just of what's happening in the factory, but of all the supporting systems like ERP, personnel scheduling, and so on.&lt;/p&gt;
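
&lt;p&gt;Since the stack is built on Bedrock, the orchestration layer's calls might resemble the Bedrock agent runtime API. A sketch; the agent and alias IDs are placeholders, and the real orchestration logic is far richer than a single call:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import boto3

runtime = boto3.client("bedrock-agent-runtime")

# Ask the orchestration agent to plan a repair; the response streams back.
stream = runtime.invoke_agent(
    agentId="AGENT_ID",
    agentAliasId="ALIAS_ID",
    sessionId="factory-session-001",
    inputText=("Pump 7 shows rising vibration. Check parts availability "
               "and propose a repair schedule."),
)
for event in stream["completion"]:
    chunk = event.get("chunk")
    if chunk:
        print(chunk["bytes"].decode("utf-8"), end="")
&lt;/code&gt;&lt;/pre&gt;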

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=OJkgqbmo0r8&amp;amp;t=1010" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foulvem7ob8cxqli01exw.jpg" alt="Thumbnail 1010" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A little bit of a summary here. Overall, we've built a platform leveraging AWS, everything we've learned, and the sensors and capabilities TDK has from its rich 90-plus-year history to produce something with low latency and real-time processing. We built in privacy and data security, which is such a critical aspect for these factories, from the get-go; it's built in from the very beginning, not something we added later. We have the ability to do resource optimization with the edge AI resources we've got. We know when things might go down and are able to keep you from outages.&lt;/p&gt;

&lt;p&gt;All of these capabilities can be personalized to the role of the persona. Are you the factory manager? Are you the maintenance worker who needs to see what parts are available? All of the interaction is very persona-driven. Connectivity independence means the solution continues to work even if you had a cloud outage: it's still receiving data, still processing, and we can still access the vibration data and detect failures even when the cloud is unavailable. The system is resilient to those situations, and we're moving away from just prediction toward creating solutions as well.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=OJkgqbmo0r8&amp;amp;t=1090" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0an1erduxpv4qgfts4j.jpg" alt="Thumbnail 1090" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As an overall summary, as Oren mentioned, we are available on the AWS Marketplace now. The solution has a variety of different sensors that we can bring to bear, and we have mobile as well as web-based dashboard interactions. Again, we're focused on some key areas: infrastructure, the manufacturing space, and energy. Smart buildings are almost like manufacturing plants these days; they have all the smart pumps and fans and those things, almost a manufactory in and of themselves. Logistics and distribution is another really big area where we're focused today.&lt;/p&gt;

&lt;p&gt;That's it. Thank you for coming and hearing a little bit more about TDK SensEI and kind of where we're trying to take this. We're building a solution that we think is really the next generation of industrial automation and factory management. Thank you.&lt;/p&gt;




&lt;p&gt;; This article is entirely auto-generated using Amazon Bedrock.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS re:Invent 2025 - How Baker Hughes is Driving Energy Innovation with AWS AI (AIM347)</title>
      <dc:creator>Kazuya</dc:creator>
      <pubDate>Thu, 11 Dec 2025 04:50:18 +0000</pubDate>
      <link>https://dev.to/kazuya_dev/aws-reinvent-2025-how-baker-hughes-is-driving-energy-innovation-with-aws-ai-aim347-hn1</link>
      <guid>https://dev.to/kazuya_dev/aws-reinvent-2025-how-baker-hughes-is-driving-energy-innovation-with-aws-ai-aim347-hn1</guid>
      <description>&lt;p&gt;&lt;strong&gt;🦄 Making great presentations more accessible.&lt;/strong&gt;&lt;br&gt;
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.&lt;/p&gt;

&lt;h1&gt;
  
  
  Overview
&lt;/h1&gt;

&lt;p&gt;📖 &lt;strong&gt;AWS re:Invent 2025 - How Baker Hughes is Driving Energy Innovation with AWS AI (AIM347)&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In this video, Cheo Alvarez from Baker Hughes discusses how the company is leveraging agentic AI in partnership with AWS to address energy industry challenges, particularly the 165% anticipated growth in energy demand. He explains Baker Hughes' Leucipa platform, which integrates physics-based models, machine learning, and agentic AI to extract meaningful insights from massive data volumes (15 petabytes per drilling rig). The presentation details their architectural approach using orchestration agents and specialized domain agents, with a practical example of reservoir monitoring and electric submersible pump optimization. Key learnings emphasized include data quality, explainability, adaptability to heterogeneous customer environments, and the critical importance of human-in-the-loop validation for heavy industry applications. The company leverages AWS technologies and contributes to the open source Energy Agents project, aiming to scale digital transformation across global energy operations.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/VOA5pyBpAvk"&gt;
&lt;/iframe&gt;
&lt;br&gt;
; This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.&lt;/p&gt;

&lt;h1&gt;
  
  
  Main Part
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VOA5pyBpAvk&amp;amp;t=0" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9vffk6gux9y56r0ccky0.jpg" alt="Thumbnail 0" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Energy Industry Challenges and the Transformational Promise of Agentic AI
&lt;/h3&gt;

&lt;p&gt;All right, everyone, well, thank you very much for attending the first session of the day. My name is Cheo Alvarez. I'm with Baker Hughes, and this is how we're driving energy forward with AI technologies in partnership with AWS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VOA5pyBpAvk&amp;amp;t=30" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffrbwtce8a5m1vafncxem.jpg" alt="Thumbnail 30" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;OK, so there's a couple of points that I want to talk about. First, setting the stage for the challenges that we're faced with in energy, which I think in large part drive the need for LLMs, agentic AI, machine learning, and the like. Then we're going to talk about how we're approaching that, specifically our use and application of agentic AI and some lessons we've learned along the way. We've been doing this for a long time; we've been introducing digital projects into the marketplace for many, many years. Of course, agentic AI is very new, but we're trying to weave it into every layer of our stack to really accelerate our own and our customers' ability to deploy these types of solutions at scale. Then a little bit about what's under the hood, which we're happy to go into a lot more detail about after the session, and finally looking ahead at where we see things going in the future.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VOA5pyBpAvk&amp;amp;t=90" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnc3obp6ib70rznxhi7jz.jpg" alt="Thumbnail 90" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;OK, so the energy industry has for years been chasing the same vision of the digital oil field. The world's energy demands, though, have only accelerated in the last few years, and this is a statistic that amazes me every time I see it: 165% anticipated growth in energy demand is really just remarkable, driven in large part by hyperscaler energy needs and the like. We have been on a journey in oil and gas of acquiring data and really appreciating the value of data at every step of our operation: from the reservoir, to when we drill and complete a well, to how we measure that well, to the ongoing operation of that well and downstream of that. You can see from the statistics that we generate massive amounts of information: 15 petabytes of data coming off a drilling rig, with 1,800 drilling rigs operating at any one time across the world, and that's only the first part of the process, just drilling the well to begin with.&lt;/p&gt;

&lt;p&gt;So there's data coming from everywhere. Extracting signal from that noise is, and forever will be, the challenge, but the opportunity here is massive. If you've followed the energy industry: when I was first growing up, the story was always that the world is running out of oil. That's not technically the case; it's that the world is sort of running out of cheap oil. And where we've risen over and over again to meet that challenge is by introducing new physical processes: horizontal drilling, hydraulic fracturing, deepwater exploration and production. We will continue to do so with technology; it's just that, increasingly, the technology will be more digital in nature, hence my talk.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VOA5pyBpAvk&amp;amp;t=210" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyn9w115hpcmd29sux6f8.jpg" alt="Thumbnail 210" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VOA5pyBpAvk&amp;amp;t=250" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuugnff0znl4qp4fgye6n.jpg" alt="Thumbnail 250" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So this is a quote from McKinsey, and I think this would resonate with everybody here, about how transformational agentic AI is going to be to every industry on the planet. It's going to transform every enterprise's operations and, in my opinion, it's going to make work a lot more interesting, as I can delegate my menial tasks to an agent and focus on something more creative and higher value-added. Where we see this impacting, again at a very high level: 33% of enterprise software will include some form of agentic AI by 2028, again driving energy demands, and 15% of work decisions will be made by agentic AI in 2028. Over a billion agents are going to be created and deployed, to varying degrees of quality, and this is where we see Baker Hughes differentiating: the quality of our agents is really going to be the competitive advantage in our ability to help our energy customers going forward, because garbage in, garbage out. We have a lot of deep domain expertise that we need to bring to our agents.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VOA5pyBpAvk&amp;amp;t=300" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv1hjekmv4nrz0px3w3w9.jpg" alt="Thumbnail 300" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Leucipa's Journey: From Digital Oil Fields to Agentic Enterprise Software Architecture
&lt;/h3&gt;

&lt;p&gt;So, where we have been: the technology area that I work in is called Leucipa. Let me talk a little bit about the digital technology program that I'm a part of, because we have been on this journey for years.&lt;/p&gt;

&lt;p&gt;The journey that we're on right now with agentic AI is the same journey that we've taken from day one with the digital oil field. The first step of the process for us was getting access to data, garbage in, garbage out; I've heard it from every person I've talked to here: get access to and contextualize that data. Then add and automate workflows: work with our customers to understand what their processes are, what data selections and technology choices they've made, what modeling tools they use, and how they apply those to their business processes. I'm trying to stay very high level here without using too much industry jargon, but again, I think that's very analogous across all industries.&lt;/p&gt;

&lt;p&gt;We want to introduce automation, again respecting our customers' technology choices and business processes, so that we can eventually deliver some type of outcome. We've been doing this since the early 2000s, first with physics-based, classical techniques. We've then layered in machine learning and AI models, and now we're adding agentic AI and weaving it through every piece of this stack as well.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VOA5pyBpAvk&amp;amp;t=390" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd2sp8j772oj470l6vvwl.jpg" alt="Thumbnail 390" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Real-world application of agentic AI: agentic enterprise software. We're taking very much the same approach that I think everyone is, introducing centralized orchestration. Where I think our real differentiator lies is, again, in the quality of our individual agents. The orchestration agents are largely generic; where we differentiate is in our architecture, how we approach these things, and the quality of the individual agents themselves. We've essentially taken Leucipa capabilities, wrapped them up as individual agents, and then contextualized them.&lt;/p&gt;

&lt;p&gt;In oil and gas, you have a reservoir that's two miles beneath you, extending for miles into the distance in every direction. You have no means of directly measuring what is going on under the earth, so everything for us is an approximation of the truth. You have the near-wellbore region, which you can measure, and you have mathematical ways of approximating what happens in the reservoir far away from the well. We need agents that have some awareness of the accuracy of their predictions. You're often estimating the same value but approaching it in different ways, so you need a contextualized agent that understands the uncertainty and the quality of the model it used to arrive at that approximation. This is where we're spending a lot of our time and focus.&lt;/p&gt;
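
&lt;p&gt;To make that concrete, here is a minimal sketch, invented for illustration rather than taken from Leucipa, of what an agent response that carries its own uncertainty and model provenance might look like:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from dataclasses import dataclass

@dataclass
class ContextualizedEstimate:
    """An agent's approximation of a reservoir value, with context attached."""
    quantity: str        # e.g. "oil-water contact depth (ft)"
    value: float         # the approximation itself
    uncertainty: float   # e.g. one standard deviation of the estimate
    model: str           # which model produced it (physics, ML, correlation)
    data_quality: float  # 0.0-1.0 score for the inputs the model consumed

    def is_actionable(self, max_uncertainty):
        # Only surface recommendations whose confidence clears a threshold.
        return max_uncertainty &gt;= self.uncertainty

# Two agents estimating the same value by different means can be compared
# on equal terms, because each carries the quality of its own approximation.
estimate = ContextualizedEstimate(
    quantity="oil-water contact depth (ft)",
    value=10250.0, uncertainty=40.0,
    model="nodal-analysis", data_quality=0.9,
)
print(estimate.is_actionable(max_uncertainty=50.0))  # True
&lt;/code&gt;&lt;/pre&gt;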

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VOA5pyBpAvk&amp;amp;t=480" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4gj32fju6t3da3bqrre0.jpg" alt="Thumbnail 480" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have our foundational capabilities that we call Leucipa. These are our physics-based models and the classical correlations that we apply. The oil industry has followed along with the adoption of computing in general: we started with very simple plotting functions; then, as CPUs became prevalent, we had reservoir simulation, nodal analysis, and ways of iteratively calculating and simulating different things; then came heavy adoption of machine learning across our industry; and now we're wrapping all of these up and packaging them as agentic AI, in the shape you can see here.&lt;/p&gt;

&lt;p&gt;It's presented as a chatbot because that's the most comfortable user interface you can offer an end user. But behind the scenes, what's happening is that Lucy is calling on all of Leucipa's various agents that we provide and contextualizing them so that we can, again, extract signal from the noise, present the right context, and call on the right agent at the right time to surface the right recommendation to our end users. You can see here that everything builds on top of everything else.&lt;/p&gt;

&lt;p&gt;So Leucipa can go out and connect to data, contextualize that data, and work with our customers' stack in whatever shape or tools they might have chosen, whether that's on premises or heavily leaning into SaaS and cloud technologies. We really need to be prepared to meet customers wherever they are, both on premises and in the cloud. And all of this is with the intention of avoiding nonsensical recommendations that just add more noise when we're meant to be extracting more signal.&lt;/p&gt;
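
&lt;p&gt;As a rough illustration of that routing idea, an orchestrator might map a user's question to the specialist agent best suited to answer it. The agent names and the keyword heuristic below are invented for the example; a real orchestrator would use an LLM or a classifier:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical registry of specialist agents and what each can answer.
AGENTS = {
    "reservoir-monitoring": ["water cut", "oil-water contact", "pressure"],
    "pump-specialist": ["esp", "pump frequency", "liquid rate"],
    "production-forecast": ["decline", "forecast", "eur"],
}

def route(question):
    # Toy heuristic: pick the agent whose keywords best match the question.
    q = question.lower()
    scores = {
        name: sum(1 for kw in keywords if kw in q)
        for name, keywords in AGENTS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] else "orchestrator-fallback"

print(route("Why is the water cut rising on well A-12?"))  # reservoir-monitoring
&lt;/code&gt;&lt;/pre&gt;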

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VOA5pyBpAvk&amp;amp;t=600" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5sy49hpeek0uf85j338n.jpg" alt="Thumbnail 600" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So this is where I think we start to differentiate: our architectural approach.&lt;/p&gt;

&lt;p&gt;It's a forward-looking architecture rooted in our history. The gravity of information sits on the left-hand side with the energy operator, our customer; on the right-hand side is Baker Hughes and our digital platform Leucipa. The energy operator has a wealth of information from all the different tools and applications they use. They have business processes that they flow through, and particular operations that they apply these to. It's very heterogeneous, in the sense that every field a customer operates is generally different.&lt;/p&gt;

&lt;p&gt;When we go to talk to a customer and give them a demo of our software, they'll bring up something like well test validation, which you would imagine is a very generalized, standardized process across the industry. Then we start getting into the details of how they do it, and they say, "That looks nice, but it's not how we do things here." What we very quickly took away from that was that we need flexibility on every side of this. That's what this architecture affords us: the flexibility to adapt our agents very quickly to the customer's choices of data sources, the customer's choices of tools, and then to stand up agentic protocols in between so that each side can surface what the other is providing.&lt;/p&gt;

&lt;p&gt;We then have orchestration agents on each side, both to introduce agents and tools and to bring humans into the loop to validate the responses of those agents and tools as they communicate back and forth. We really view agents as partners, as teammates here, not as a displacement of human capital but as an extension of it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VOA5pyBpAvk&amp;amp;t=720" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmxrqtktqgojesvc0znoq.jpg" alt="Thumbnail 720" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Real-World Implementation: Reservoir Monitoring Agents and Lessons Learned at Scale
&lt;/h3&gt;

&lt;p&gt;To zoom in and get very practical about what one of these use cases looks like: we may have a customer who's chosen a particular flavor of reservoir monitoring agent. It could be built on a physics-based tool or a machine learning-based tool, or even on simple surveillance of tags coming off a wellhead. That reservoir monitoring agent should be tuned to specific conditions. In this case, we're describing a rising oil-water contact: if we get too much water encroaching into a well, we will permanently damage it, produce far more water than we want over the life of that well, and effectively lose the well.&lt;/p&gt;

&lt;p&gt;We'll have this reservoir monitoring agent running on the customer side or on the Baker Hughes side, running in the background and observing. When it sees some type of event start to happen that it knows we can influence with another agent, it will call out to a Baker Hughes specialist agent. Baker Hughes specializes in the design and manufacture of electric submersible pumps, pumps that sit two miles underground at the bottom of the wellbore. We have various simulators we can use to predict the optimal pump frequency, speed, or liquid production through that pump so that we don't damage the reservoir. Typically, that results in a decrease, a slowing down, of that pump.&lt;/p&gt;

&lt;p&gt;Baker Hughes' agentic models make the recommendation of what we need to decrease to, and this is where it starts to get very important for our industry. In applying AI, and agentic AI specifically, to heavy industry, we really need to lean hard into human-in-the-loop validation of recommendations by specialists who understand the physical principles behind what these agents are recommending. Our agent might make a recommendation, but the quality of that recommendation is wholly dictated by the model it called on and the quality of the data going into it. Our human in the loop sanity-checks it before sending it back to the customer. This is a service that we provide.&lt;/p&gt;

&lt;p&gt;We send that recommendation back to the customer, who then validates it with their own human in the loop, the production engineer. Baker Hughes is not able in all cases to act on that recommendation itself: some customers want us to provide that service, while others want to introduce their own human in the loop. It will generally be validated by their production engineer, an employee of theirs, who approves the recommendation coming from our subject matter expert. Then they push that recommendation out to an edge device.&lt;/p&gt;

&lt;p&gt;We've really instrumented all of our wellheads these days, so we can push that recommendation out to an edge device without necessarily needing to send somebody out there to make the change physically. And then the cycle just repeats: the reservoir monitoring agent goes back to observing and verifies that the pump change actually had the impact we expected, and the process continues. We have so many different agents, calculators, and ways of approximating truth and quantifying uncertainty that this is really a generalized process. This is one particular example, but it scales well and can be generalized to so much of what we do.&lt;/p&gt;
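
&lt;p&gt;A compressed sketch of that observe, recommend, validate, actuate cycle is below. Every function is a stand-in for the systems described in the talk (monitoring agents, pump simulators, human reviewers, edge devices), not a real Leucipa or Baker Hughes API, and the bodies are trivial stubs so the sketch runs:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;def observe(well):
    # Reservoir monitoring agent: returns an event when it sees something
    # another agent can influence, e.g. a rising oil-water contact.
    return {"well": well, "type": "rising-oil-water-contact"}

def recommend(event):
    # Pump specialist agent: simulate the ESP and propose a slower setpoint.
    return {"well": event["well"], "pump_frequency_hz": 48.0}

def sme_approves(rec):
    # Provider-side human in the loop: an SME sanity-checks the physics.
    return True

def customer_approves(rec):
    # Customer-side human in the loop: their production engineer signs off.
    return True

def push_to_edge(rec):
    # Instrumented wellhead: apply the setpoint without a site visit.
    print(f"Applying {rec['pump_frequency_hz']} Hz on {rec['well']}")

def monitoring_cycle(well):
    event = observe(well)
    if event is None:
        return
    rec = recommend(event)
    if sme_approves(rec) and customer_approves(rec):
        push_to_edge(rec)
    # The monitoring agent then goes back to observing to verify the change
    # had the expected impact, and the cycle repeats.

monitoring_cycle("A-12")
&lt;/code&gt;&lt;/pre&gt;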

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VOA5pyBpAvk&amp;amp;t=970" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8svl5iumt9e5ob581ff6.jpg" alt="Thumbnail 970" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How do you break up a complex system? You break it into small parts. We have many agents, each solving a small part of this, and the orchestration agent, the higher-level optimizer of optimizers, is really just making everything work better together. What we have learned through this is very much what we learned when we started doing this, back before AI, ML, and agentic tools were widely prevalent. Data quality is everything, whether you're working with physics models, calculations, or algebraic correlations. The quality of the data is the foundation you stand on.&lt;/p&gt;

&lt;p&gt;The user experience matters too: as we introduce more and more of what are perceived to be black boxes, the explainability of any recommendation or opportunity we surface is absolutely key. Our users are highly technical specialists who have spent years understanding the first-principles physics of how a pump works, how a reservoir behaves, and how to solve these problems numerically and analytically. So we need to be able to explain, at a very low level, how we arrived at any particular recommendation. That's something we were told from day one as we tested the agentic concept with people: "Explainability, to me, is everything."&lt;/p&gt;

&lt;p&gt;Adaptability. We use the term heterogeneity to describe our reservoirs, and we have heterogeneous organizations as well. Every customer chooses different tools, different data stacks, different approaches, different business processes, and their fields are all different. We need to be prepared to meet them wherever they are on their own digital transformation journey. And we work with customers not just in North American land, obviously, but around the world, and they will all be at different stages of their digitalization process.&lt;/p&gt;

&lt;p&gt;And the last bit is governance and cost: how to govern these things. HS&amp;amp;E again: in heavy industry, a wrong recommendation made on faulty data or stale models can have disastrous consequences. Putting guardrails in place based on Baker Hughes's expertise is one of the areas where we think we bring the most value to our customers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VOA5pyBpAvk&amp;amp;t=1110" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdykouklvvu6bg32zi4j.jpg" alt="Thumbnail 1110" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What makes it work? A lot of the familiar AWS technologies that we know and love and have heard announced and presented today. There's an open source project I want to highlight called Energy Agents, which AWS sponsors and has released; you can check it out on GitHub. It demonstrates a lot of the same techniques we apply internally. We build on these things: data management, of course, and then, again, guardrails, which I've mentioned.&lt;/p&gt;
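
&lt;p&gt;As one concrete way to picture the guardrails piece on AWS, here is a minimal sketch using Amazon Bedrock Guardrails. The guardrail name, topic, and messages are invented for illustration, and the parameter shapes follow the create_guardrail API as documented, so verify against the current boto3 reference before relying on it:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import boto3

bedrock = boto3.client("bedrock")

# Deny an off-limits topic and filter harmful content on input and output.
guardrail = bedrock.create_guardrail(
    name="field-operations-guardrail",
    description="Blocks unsafe or off-domain operational recommendations.",
    topicPolicyConfig={
        "topicsConfig": [{
            "name": "UnreviewedSetpointAdvice",
            "definition": "Operational setpoint advice that bypasses HSE review.",
            "type": "DENY",
        }]
    },
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        ]
    },
    blockedInputMessaging="This request cannot be processed.",
    blockedOutputsMessaging="This response was blocked by policy.",
)
print(guardrail["guardrailId"])
&lt;/code&gt;&lt;/pre&gt;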

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VOA5pyBpAvk&amp;amp;t=1150" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F39fb8xs6outl28vgacoo.jpg" alt="Thumbnail 1150" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Looking ahead: where we're headed is very much anchored in our past of connecting to and contextualizing data, working with customers to describe their workflows, automating manual processes powered by these agents, and then, of course, using all of that to drive some type of impactful outcome. We have been doing this manually, with our implementation engineers and our developers creating these types of tools. I'm speaking specifically about our customer-facing Leucipa application today, but we see agentic AI as what will help us adopt at massive scale the technologies we have developed over the years, technologies that are much harder to roll out at scale given the level of tuning and customization needed to bring them to an asset.&lt;/p&gt;

&lt;p&gt;So weaving agentic AI into how we onboard our customers, how we tailor our workflows and adapt them to a particular customer's needs, and how we create and turn those over to customers so they can use them themselves: we believe that's really going to be the game changer, what's going to allow us to digitize this operation at scale. We think it's going to be possible, maybe for the first time ever, with agentic AI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=VOA5pyBpAvk&amp;amp;t=1240" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fest58qvd4ryt4rl1tso7.jpg" alt="Thumbnail 1240" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I think with that, I am very much at time and perfectly on time, so thank you very much. And we'll be happy to answer any questions. We'll be hovering around there somewhere, so thank you very much.&lt;/p&gt;




&lt;p&gt;; This article is entirely auto-generated using Amazon Bedrock.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS re:Invent 2025 - Developing AI Solutions: What Every Developer Should Know (TNC207)</title>
      <dc:creator>Kazuya</dc:creator>
      <pubDate>Thu, 11 Dec 2025 04:40:20 +0000</pubDate>
      <link>https://dev.to/kazuya_dev/aws-reinvent-2025-developing-ai-solutions-what-every-developer-should-know-tnc207-46ok</link>
      <guid>https://dev.to/kazuya_dev/aws-reinvent-2025-developing-ai-solutions-what-every-developer-should-know-tnc207-46ok</guid>
      <description>&lt;p&gt;&lt;strong&gt;🦄 Making great presentations more accessible.&lt;/strong&gt;&lt;br&gt;
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.&lt;/p&gt;

&lt;h1&gt;
  
  
  Overview
&lt;/h1&gt;

&lt;p&gt;📖 &lt;strong&gt;AWS re:Invent 2025 - Developing AI Solutions: What Every Developer Should Know (TNC207)&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In this video, Satabdi, a Senior Solutions Architect at AWS, explores essential skills for generative AI developers. She addresses the AI readiness gap where organizations struggle to find qualified talent despite high demand. The presentation covers five core competencies: prompt engineering, Retrieval Augmented Generation (RAG), agentic systems, fine-tuning, and retraining. She emphasizes building AI applications on ethical foundations including fairness, explainability, privacy, security, controllability, veracity, robustness, governance, and transparency. The session highlights AWS certifications like AI Practitioner, Associate Data Engineer, and the new beta Generative AI Developer certification as trust signals for verifiable skills. Actionable steps include accessing free resources on AWS Skill Builder and pursuing AWS certifications to advance generative AI careers.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/obKSKTQgCRA"&gt;
&lt;/iframe&gt;
&lt;br&gt;
; This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.&lt;/p&gt;

&lt;h1&gt;
  
  
  Main Part
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=obKSKTQgCRA&amp;amp;t=0" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8mz0g4c0pyowlx4yvav5.jpg" alt="Thumbnail 0" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Introduction: Building Essential Skills for Generative AI Developers
&lt;/h3&gt;

&lt;p&gt;Hello everyone, I'm Satabdi. I'm a Senior Solutions Architect with AWS for the last four years now, and today we are going to explore the skills that make you exceptional as a generative AI developer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=obKSKTQgCRA&amp;amp;t=20" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmrqyzyoyqqh3nnj8w1zt.jpg" alt="Thumbnail 20" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Alright, so let's take a look at the agenda. First, we will see the current state of AI readiness, where organizations are, what skills are missing, and why the demand for generative AI developers is growing so rapidly. Then we will look at the essential competencies every generative AI developer should have, and that includes the technical competencies like model integration and prompt engineering, and also the applied skills like responsible AI and solution design. Then you will see how AWS Training and Certification can help you build and validate those skills that make you stand apart in this area. And finally, we will see the small actionable steps that you can take today to build and advance your generative AI career.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=obKSKTQgCRA&amp;amp;t=70" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuu468xazhgsfyl7xd3k1.jpg" alt="Thumbnail 70" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The AI Readiness Gap: Talent Shortage in a Rapidly Evolving Market
&lt;/h3&gt;

&lt;p&gt;Okay, so what we are seeing in the market right now is not a lack of interest in AI; it's a readiness gap. Almost all companies want to add AI to their solutions, but they are not finding the people who can make that happen for them. The technology is not the challenge here; it's the talent. AI tools are growing faster than the workforce skill set, creating real friction between ambition and execution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=obKSKTQgCRA&amp;amp;t=100" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc3aypefdzannlfiacz7q.jpg" alt="Thumbnail 100" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But for developers, it's a real opportunity. Those who can blend AI knowledge with hands-on skills can drive real transformation in their organizations. And AI is not only creating new jobs and skills; it is also transforming existing jobs. The skill set that made somebody exceptional five years ago will not be enough five years from now. That's why AI literacy, the ability to combine AI skills with hands-on knowledge and build applications, is becoming a critical baseline for career growth and development.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=obKSKTQgCRA&amp;amp;t=150" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkyp1e2xwinm3uyqcy6w6.jpg" alt="Thumbnail 150" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=obKSKTQgCRA&amp;amp;t=160" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo2v4tw8dtzkx0tmgxfq4.jpg" alt="Thumbnail 160" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Five Core Technical Competencies: From Prompt Engineering to Model Retraining
&lt;/h3&gt;

&lt;p&gt;Now we'll see the core competencies that every generative AI developer should build, and it starts with prompt engineering: the way you guide your LLM with proper context, examples, and output cues. Next is Retrieval Augmented Generation, also known as RAG, which is how you ground the model in your data so that it provides accurate and trustworthy responses. And next come agentic systems: this is where LLMs go beyond just responding and start gathering information and taking actions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=obKSKTQgCRA&amp;amp;t=200" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhv8fq7z9lha0od4zeha6.jpg" alt="Thumbnail 200" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you have those three core competencies, you move on to fine-tuning, where you customize a model with your own data so that it speaks your company's domain. And finally there's retraining, where you build or adapt a model from the ground up for specialized use cases. Together, these five skills transform generative AI from tools you use into capabilities you own.&lt;/p&gt;
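
&lt;p&gt;To ground the first two competencies, here is a minimal, illustrative RAG round trip on AWS: retrieve a few relevant passages, pack them into an engineered prompt, and call a Bedrock model through the Converse API. The in-memory retriever is a toy stand-in for a vector store, and the model ID and documents are placeholders:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import boto3

DOCUMENTS = [
    "Refunds are processed within 5 business days.",
    "Premium support is available 24/7 on enterprise plans.",
]

def retrieve(query, k=2):
    # Stand-in for semantic search: rank documents by word overlap.
    words = set(query.lower().split())
    ranked = sorted(
        DOCUMENTS,
        key=lambda d: len(words.intersection(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def answer(query):
    context = "\n".join(retrieve(query))
    # Prompt engineering: a role, grounded context, and an explicit
    # instruction to admit when the context is insufficient.
    prompt = (
        "You are a support assistant. Answer using ONLY the context below.\n"
        f"Context:\n{context}\n\nQuestion: {query}\n"
        "If the context does not contain the answer, say so."
    )
    client = boto3.client("bedrock-runtime")
    resp = client.converse(
        modelId="amazon.nova-lite-v1:0",  # placeholder model ID
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return resp["output"]["message"]["content"][0]["text"]

print(answer("How long do refunds take?"))
&lt;/code&gt;&lt;/pre&gt;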

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=obKSKTQgCRA&amp;amp;t=230" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgxwqscwlzgvrk7lx9r3i.jpg" alt="Thumbnail 230" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Building Responsibly: Ethical Foundations and Operational Principles for Generative AI
&lt;/h3&gt;

&lt;p&gt;So now we have seen the core competencies, and they define the generative AI applications you can build. But when we talk about building generative AI applications here at AWS, they are built on strong ethical foundations, so next we will see how to build generative AI applications ethically, responsibly, and at scale. It starts with fairness, so that the models and applications we build don't disadvantage any group. Next is explainability, which lets teams evaluate and understand why a model reached a particular conclusion. Then come privacy and security, which we put a high emphasis on: protecting the data, the models, and the people behind them. And then comes safety, which prevents unintended system behavior and harmful system usage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=obKSKTQgCRA&amp;amp;t=290" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fizjkdeupevw4r83jfu52.jpg" alt="Thumbnail 290" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then there is the operational side, where controllability allows you to guide the AI system, and veracity and robustness make sure the model's output is correct even under stress.&lt;/p&gt;

&lt;p&gt;Governance puts accountability into every step of the AI lifecycle, and transparency enables stakeholders to make informed, confident decisions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=obKSKTQgCRA&amp;amp;t=320" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8kekkxfdawnhso90q5in.jpg" alt="Thumbnail 320" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Certifications: Validating Skills Through Trust Signals
&lt;/h3&gt;

&lt;p&gt;Now that we have covered the skills developers are building and how to build AI responsibly, let's see what organizations are looking for. Employers want trust signals that somebody has real, verifiable skills they can use. It's nearly impossible to verify every skill somebody has during promotion or hiring, but certification can close that gap. Certification sets a clear benchmark of verifiable skills, and that is why we are seeing certifications show up in more and more job requirements.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=obKSKTQgCRA&amp;amp;t=370" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffyreaja39ubg5bc25dat.jpg" alt="Thumbnail 370" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here are the key AWS certifications available at this moment. We start with the foundational AI Practitioner, then the Associate Data Engineer, and we have also introduced a new certification, Generative AI Developer, which is in beta right now. This is the progression you can follow with your AI skills. There is also a 50%-off voucher in your re:Invent portal from when you signed up for re:Invent, and you can use that voucher for any of these certifications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=obKSKTQgCRA&amp;amp;t=410" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fem5h9fq627sq2qe1bu9h.jpg" alt="Thumbnail 410" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is feedback from one of our practitioners who took the exam, and I will give you some time to read the feedback. This is exactly the kind of feedback we had in mind when we built the certification suite we have for generative AI builders.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=obKSKTQgCRA&amp;amp;t=440" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ghdzqsajb49bwtgiqpp.jpg" alt="Thumbnail 440" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Taking Action: Free Resources and Next Steps on AWS Skill Builder
&lt;/h3&gt;

&lt;p&gt;Moving on to the actionable steps: generative AI is already here, and we have to build the skills to build the future. You can scan this QR code; it will take you to AWS Skill Builder, where some free resources are available for you this month. You can take the foundational AWS certifications, Cloud Practitioner and AI Practitioner, and you'll also find labs and all the resources we talked about in this presentation, which you can work through at your own pace to build the skills you need.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=obKSKTQgCRA&amp;amp;t=480" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fby6so0514ow3x71tqj4b.jpg" alt="Thumbnail 480" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thank you.&lt;/p&gt;




&lt;p&gt;; This article is entirely auto-generated using Amazon Bedrock.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS re:Invent 2025 - Build an Agentic SaaS App in 5 Steps: From Idea to Revenue (AIM104)</title>
      <dc:creator>Kazuya</dc:creator>
      <pubDate>Thu, 11 Dec 2025 04:40:18 +0000</pubDate>
      <link>https://dev.to/kazuya_dev/aws-reinvent-2025-build-an-agentic-saas-app-in-5-steps-from-idea-to-revenue-aim104-165j</link>
      <guid>https://dev.to/kazuya_dev/aws-reinvent-2025-build-an-agentic-saas-app-in-5-steps-from-idea-to-revenue-aim104-165j</guid>
      <description>&lt;p&gt;&lt;strong&gt;🦄 Making great presentations more accessible.&lt;/strong&gt;&lt;br&gt;
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.&lt;/p&gt;

&lt;h1&gt;
  
  
  Overview
&lt;/h1&gt;

&lt;p&gt;📖 &lt;strong&gt;AWS re:Invent 2025 - Build an Agentic SaaS App in 5 Steps: From Idea to Revenue (AIM104)&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In this video, John Huehn and Steve Bixby from Pega Systems demonstrate building an enterprise-grade agentic SaaS application using Launchpad in five steps. They showcase creating a mortgage pre-approval application for a fictional fintech called HomeLend that sells to banks. The demo illustrates using Gen AI-powered blueprints to design workflows, deploying on AWS infrastructure, adding enterprise capabilities like human approval rules for loans over $5 million, making it available to multiple bank subscribers, and showing live customer usage through a conversational interface that reduces processing time from days to seconds for customers like Sarah seeking home loan pre-approval.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/C2AcLKLATFI"&gt;
&lt;/iframe&gt;
&lt;br&gt;
; This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.&lt;/p&gt;

&lt;h1&gt;
  
  
  Main Part
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=0" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73lrslwuacm8kddlf6kw.jpg" alt="Thumbnail 0" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Introduction: Building an Enterprise-Grade Agentic SaaS Application with Launchpad
&lt;/h3&gt;

&lt;p&gt;All right, hey everyone, I'm John Huehn. I'm the Chief of Launchpad for Pega Systems, and with me today is Steve Bixby, who is our Senior Vice President of Product Engineering. We are delighted to be spending the next 20 minutes with you building an enterprise-grade agentic SaaS application that can run a complex and regulated process and be sold to thousands of B2B clients in just five easy steps.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=30" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3727v6623ohgd8jw7rx.jpg" alt="Thumbnail 30" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=50" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhg59n7cvmdz8ivpusj8b.jpg" alt="Thumbnail 50" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Those steps include making the design with a Gen AI-powered blueprint, making it real here on AWS, making it work with enterprise capabilities, making it available to your subscribers, and making it rain with real live customer usage.  We're going to demonstrate how a fintech that we're going to call HomeLend can build and sell to banks an agentic application that will dramatically accelerate processing and deliver a great customer experience for a scenario that many of us may have gone through ourselves or may one day go through in our lives, which is mortgage pre-approvals.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=70" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjjhg975gbtt1e7dkmqaa.jpg" alt="Thumbnail 70" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Of course, banks are super excited to see their work get automated, but they also can't afford to have AI introduce variability into important decisions like creditworthiness. That's where a lot of agentic applications struggle. They have armies of agents trying to manage other armies of agents to deliver an outcome, and of course that just naturally breeds all kinds of variability, which can't exist in financial decisions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=100" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmth20mql7zihmq7k9e0n.jpg" alt="Thumbnail 100" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=120" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe1a2g1tugil3vncv0p8z.jpg" alt="Thumbnail 120" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So today we're going to use workflows to manage our AI agents to deliver a controlled, deterministic, predictable outcome, the kind of outcome that regulated industries like banking actually love. To build our application, we're going to use Launchpad, the AI-powered low-code app development platform from Pega. If you've never used Launchpad before, what you should know is that whether you're looking to build a new application, enhance a legacy application with workflow or Gen AI, or replace or re-platform a tech-debt-laden old application with something new and modern that you can innovate on faster, Launchpad will dramatically accelerate your speed to market, reduce your development time, and reduce your costs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=190" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl54drxklvv8r4716l1ju.jpg" alt="Thumbnail 190" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It provides a vibe development experience that helps get your app started with a managed database and secure, scalable cloud powered by AWS, with industry-leading workflow, integration, and UX and reporting capabilities, and with pre-built subscriber management, administration, and configurability, all with a usage-based pricing model that scales with your business.  Ultimately, Launchpad gives you everything that you need to build and sell an enterprise-grade application without the cost and complexity of managing your cloud, your security, and of course code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=210" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3wb6fw3earqm8rby07sh.jpg" alt="Thumbnail 210" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So we're going to get started. Here's our scenario. HomeLend, our make-believe fintech that builds and sells apps to banks, has identified a gap in the mortgage pre-approvals process that impacts people like Sarah.  Sarah's dream house just came on the market, and she is super excited. She very quickly booked a showing of the house, contacted her bank, and provided all of her documentation to get a mortgage pre-approval and the corresponding letter.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=220" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyvbtt3xwd0p27lt9dx46.jpg" alt="Thumbnail 220" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=230" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu3c3c9fcumzn2ifdcrs8.jpg" alt="Thumbnail 230" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But unfortunately, her bank took two days to process her application and get back to her with that letter.  Sadly, during that time, somebody else bought Sarah's dream house. Imagine how disappointing that is, and HomeLend understands that pain. So they are going to build an agentic SaaS application for banks to reduce the processing time from days to seconds and deliver a great experience for folks like Sarah.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=260" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq6myulm9w5dsm9z2xzkc.jpg" alt="Thumbnail 260" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Something that actually looks like this, an agentic SaaS application with a conversational interface. So Steve, I'm hoping I can count on you now to play the role of HomeLend, and I want to build an app that manages the mortgage pre-approvals process and other loan origination stuff. All right, well, that's a challenge. We're going to do it. Hopefully the Wi-Fi holds up. We're going to build this real quick. Last session of the day, thank you all for being here.&lt;/p&gt;

&lt;p&gt;Okay, John, could you actually just maybe say that one more time? Okay, Steve, I want to build an app that manages the mortgage pre-approvals process and other loan origination stuff. We got it. All right, brilliant. That's enough to get started with this process. So here we go.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=310" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd3xw6ysl8l9chgt8d5aj.jpg" alt="Thumbnail 310" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=330" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk60ff43mbh0osfwsprdo.jpg" alt="Thumbnail 330" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Live Demo: Creating HomeLend's Mortgage Pre-Approval Application from Blueprint to Deployment
&lt;/h3&gt;

&lt;p&gt;We're in Launchpad. If you want to see Launchpad and try it yourself, we're at the Pega booth just over there; everything I'm showing here, you can do for yourself. Check it out at launchpad.io. All right, so it said: perfect, what's your application name? This is HomeLend, so HomeLend. And here we go: I can now build your loan origination application, which will include a mortgage pre-approval process. It's identified the industry, the location, and all the things needed to build an application. Obviously, we haven't really provided enough context, but it's giving us the option to start building. So for the sake of time, let's just go for it. Let's build.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=360" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8sbot5attjjjvqf51xci.jpg" alt="Thumbnail 360" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=370" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhruzl9chfiecdkfrn43u.jpg" alt="Thumbnail 370" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All right, so it now says I'm building your application.  I'm starting with an industry template and customizing the workflows, connecting the channels, setting up the personas, and configuring the data and integrations. Boom. All right, awesome.  So the chat is now pushed over to the right, and we're presented with the development environment for Launchpad here. You can see that our launch readiness is about a quarter of the way done. We can keep an eye on that as we walk through the process here.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=400" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fesf73lja4jwfkpojr76i.jpg" alt="Thumbnail 400" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So it's identified that we are this HomeLend corporation, we're in US retail banking, and this is a loan origination application. Great. And if I click here in the middle of this diagram, it's going to show us the workflows that it created. So it created eight workflows, it looks like. Here they are, and mortgage pre-approval is in fact one of them. That's good. The rest of them are the other stuff. That's right, lots of other stuff. That's good. That's what we asked for.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=420" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6s0vmjoks6g2n5uzqthn.jpg" alt="Thumbnail 420" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's take a look at the mortgage pre-approval process. All right. So what Launchpad does for you is take that business process and break it into a series of stages and steps to organize it. I can now interact with this: add steps, add stages, drag them, drop them, move them around, and update it. If we look at it, we've got collect application data, verify information, and then we go through financial assessment, pre-approval decision, and ultimately finalization. So this actually looks pretty good in terms of delivering what Sarah's looking for in that scenario you described.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=460" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmrbpkeiignhmqoq2ltod.jpg" alt="Thumbnail 460" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So let's make a few changes. Actually, before we do that, let me show you the data model over here. So it's also generated a data  model for us. This thing's generated a full UI. It's all there. We're going to see it in a minute. Again, I can update this and make changes to it as needed. You see, we've got an applicant name, date of birth, W-2, annual income, all the things that we would need for that mortgage pre-approval process, all created with generative AI using the Bedrock models from AWS.&lt;/p&gt;
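
&lt;p&gt;As a rough picture of what such a generated data model covers, here is a sketch with field names inferred from the demo narration; it is illustrative only, not Launchpad's actual schema:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from dataclasses import dataclass
from datetime import date

@dataclass
class MortgagePreApprovalApplication:
    """Case data the generated model captures for pre-approval."""
    applicant_name: str
    date_of_birth: date
    email: str
    phone: str
    annual_income: float
    w2_document: str   # path or ID of the uploaded W-2 file
    loan_amount: float
&lt;/code&gt;&lt;/pre&gt;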

&lt;p&gt;So let's go back to the lifecycle now. Let's make a change, John. What should we change? So I think, Steve, maybe we want to get some human eyeballs on it if somebody's going to ask for a pre-approval over, say, five million dollars. Five million dollars would be good to have somebody actually take a look at. So in the pre-approval decision here, I could just type this over in the chat on the right, but I want to show you a cool feature where I can actually do it inline: I can just click on the area of the screen that I want to change, and I can say, configure the system so that any loan request over five million requires human approval, period. Boom. Nice.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=540" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsufjjvqezfzslakjc2gn.jpg" alt="Thumbnail 540" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=560" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fehqo8l7407hg5andff7k.jpg" alt="Thumbnail 560" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Awesome. So this just inserted a human approval step, or actually a decision to determine whether we need a human approval step. And if you want to know what this actually did behind the scenes, I can open the left-hand panel, and it shows all of the assets that have been created to support this application. Here's that human approval down at the bottom: it's a decisioning rule, it's private, and it generated an expression for loan amount greater than five million dollars. So that's awesome. All right, let me close this just to show you can work with the full screen here.&lt;/p&gt;
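
&lt;p&gt;The generated rule amounts to a single predicate on the case data. Launchpad expresses this as a decisioning rule rather than code, but the equivalent logic, sketched here for clarity, is:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;HUMAN_APPROVAL_THRESHOLD = 5_000_000  # USD

def requires_human_approval(loan_amount):
    # Route any request above the threshold to a human reviewer;
    # smaller requests flow through the automated decision.
    return loan_amount &gt; HUMAN_APPROVAL_THRESHOLD

assert requires_human_approval(6_500_000)
assert not requires_human_approval(1_000_000)
&lt;/code&gt;&lt;/pre&gt;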

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=570" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9dy17wlghugs4eddz6u9.jpg" alt="Thumbnail 570" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=590" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffvq6vlbmqchuaq35ovcp.jpg" alt="Thumbnail 590" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's make one more change. Let's add an agentic step, or at least a placeholder where we can insert one. So let me see: let's integrate the AWS Nova agent to automate employment verification through an approved third-party service. So you can add decision steps, and you can add human steps, which would require a screen or some sort of interaction, whether through self-service or maybe through a contact center agentic conversation.&lt;/p&gt;

&lt;p&gt;You can add agent calls, decisions, automations, all of that here to build out this application. Now, because we have such a short window, we're going to say, all right, this is probably what we need from here. Let's preview it and see what this application looks like to a potential end user.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=630" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo1xk2ao6k81gxxl0dw8y.jpg" alt="Thumbnail 630" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=650" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdkghytj74o55iw1d655z.jpg" alt="Thumbnail 650" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=670" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fex041189eae9tmgp7ftc.jpg" alt="Thumbnail 670" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=680" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu45pelyx95s4ebtyj59i.jpg" alt="Thumbnail 680" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All right, so here we go. It even says right here: preview, Demo Bank. It's not real; this is a demo. This is how you test the application now, and I'm presented with this virtual assistant that says, Hi, I'm your virtual assistant, how can I help you today? I'm going to tell it, hi, I'd like to get pre-approved for a home loan. It says, sure, I'd be happy to help; to get started, could you please provide your full name and date of birth? Of course: my actual name, Steven Bixby, June 14th, 1983, not my actual birthday. I was gonna say, I did not realize how much younger you are than me, Steve. Thank you. What loan amount would you like to be pre-approved for? I'm looking for $1 million. Got it. To proceed...&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=700" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh2fagblievxsy9xnf569.jpg" alt="Thumbnail 700" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=710" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxpxf7vs91xs1gz87pn33.jpg" alt="Thumbnail 710" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=720" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flf5f5bbj0c06rv7vr9mm.jpg" alt="Thumbnail 720" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So you understand what this is doing: it's actually going through the first step of the process and trying to collect the necessary information. To visualize that further, in this preview portal we can say, okay, that's what it's going to look like through this self-service chat, but what would it look like in the bank back office, in a desktop application? So this is the application that it's created for us, and you can see there's a dashboard, there's chat, there's all kinds of stuff going on here. And I can actually create an instance of any one of these workflows right from here, including that mortgage pre-approval.&lt;/p&gt;

&lt;p&gt;And now I'm gonna see those fields that were prompted in that first stage, which included the applicant name, the date of birth, email, phone, annual income, and the W-2 file. So that's pretty cool, because you just built the core logic of the app. And we're showing it in the conversational interface, but you can actually do the same thing on the employee desktop, and it would also work on your website for self-service and in a mobile app. You build that logic once in the core, and it deploys across all of your channels. Exactly.&lt;/p&gt;
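
&lt;p&gt;To make that "build once, deploy everywhere" idea concrete, here is a minimal Python sketch of a single workflow definition rendered for two different channels. This is a hypothetical illustration, not Pega Launchpad's actual API; all names in it are made up.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical sketch: one workflow definition reused across channels.
WORKFLOW = {
    "name": "mortgage_preapproval",
    "stages": [
        {
            "name": "collect_applicant_info",
            "fields": ["applicant_name", "date_of_birth", "email",
                       "phone", "annual_income", "w2_file"],
        },
    ],
}

def as_chat_prompts(stage):
    """Render a stage's fields as conversational prompts (chat channel)."""
    return [f"Could you please provide your {f.replace('_', ' ')}?"
            for f in stage["fields"]]

def as_form_schema(stage):
    """Render the same fields as a form schema (desktop channel)."""
    return {f: {"label": f.replace("_", " ").title(), "required": True}
            for f in stage["fields"]}

stage = WORKFLOW["stages"][0]
print(as_chat_prompts(stage)[0])         # chat channel prompt
print(as_form_schema(stage)["w2_file"])  # back-office form field
&lt;/code&gt;&lt;/pre&gt;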

&lt;p&gt;All right, so let's assume that this app is perfect. There are all kinds of things we would probably want to tweak to make this our own and make it actually serve the purpose we need, but from here, I think we can just launch this thing. What's unique about Launchpad is that it's built specifically for organizations that are building SaaS products they are going to then go sell. So we're the provider here, and we're gonna go sell this to our subscribers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=790" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6uqwp8nfzcsd9hepi59.jpg" alt="Thumbnail 790" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=800" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft7wtf6w471cdj4dkjvta.jpg" alt="Thumbnail 800" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=820" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhg6oti2lvxy5u6w5s8xm.jpg" alt="Thumbnail 820" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So when you launch it, it's actually gonna take this blueprint of our application and make it real. It does the domain data modeling, the workflow, the UI screen generation, the integration setup, and the security configuration, all of that to make sure we have an app we can actually deploy so that our subscribers can use it. I'm presented now with my list of subscribers, the HomeLend subscribers, and there are five banks here. I'm just gonna say, let's deploy it so that all of them have access to it. And let's deploy this application.&lt;/p&gt;
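
&lt;p&gt;A rough sketch of that launch flow, purely to illustrate the shape of it: the blueprint goes through a fixed series of steps, and the result is deployed per subscriber tenant. This is not Launchpad's internals, and every name below other than U+ Bank is a placeholder.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical sketch of the launch flow described above.
LAUNCH_STEPS = [
    "domain_data_modeling",
    "workflow_setup",
    "ui_screen_generation",
    "integration_setup",
    "security_configuration",
]

def launch(blueprint, subscribers):
    """Turn a blueprint into a running app, then deploy it per tenant."""
    app = {"blueprint": blueprint, "steps_done": list(LAUNCH_STEPS)}
    # Multi-tenant deployment: each subscriber gets its own active instance.
    return {bank: {"app": app, "status": "active"} for bank in subscribers}

subscribers = ["U+ Bank", "Bank B", "Bank C", "Bank D", "Bank E"]
deployments = launch("HomeLend mortgage pre-approval", subscribers)
print(deployments["U+ Bank"]["status"])  # active
&lt;/code&gt;&lt;/pre&gt;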

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=840" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F16wqfuw5z4qfymo1840x.jpg" alt="Thumbnail 840" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Real-World Application: Sarah's Success Story and Closing Remarks
&lt;/h3&gt;

&lt;p&gt;All right. So now that I've done that and made this available to all my subscribers, I'm looking at my subscribers dashboard. As a provider, I can now see what's happening with the apps that I've built and deployed. I can see the usage, I can even see the revenue, things like that. And right here, I'm seeing those banks and seeing that the HomeLend subscription is active for all of them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=870" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwhrqlsqmeuwdbhh0y99.jpg" alt="Thumbnail 870" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=880" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkee9pmpv3diqngkpgaf9.jpg" alt="Thumbnail 880" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So now that we've built this application, I'm gonna do a little bit of fantasy here. We're going to flip to one of these banks' websites and assume that they've already deployed this virtual assistant on their website. So it's no longer the not-real Demo Bank; it's actually the U+ Bank website. Here we are on their website. They've deployed this assistant here, and I can click on it and it says, I'm your U+ Bank virtual assistant. How can I help you today? So, very similar to what we saw in the preview in the development environment, we're now seeing it live for a real subscriber.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=900" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fneylgp5rwq1l9ttk09mh.jpg" alt="Thumbnail 900" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And this time, I'm going to say I'm Sarah, right? So let's be Sarah in this use case. Hi, I'd like to get pre-approved for a home loan. All right. Sure, happy to help. To get started, please provide your full name. So this is gonna take us through that same process again, as Sarah this time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=930" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn28dmkg4nhu2k4a85sc6.jpg" alt="Thumbnail 930" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=940" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhptpvymhzkopfjl82we.jpg" alt="Thumbnail 940" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=950" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fma33m81nhpsg9n1d0dmg.jpg" alt="Thumbnail 950" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=960" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flt611og3q3e1rhqssdxi.jpg" alt="Thumbnail 960" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sarah Wilson, March 18th, 1974. Thank you. What loan amount would you like to be approved for? Let's do a million dollars again. All right, everybody: a million dollars for Steve, a million dollars for Sarah. Everybody gets a million dollars. And now it's asking for the W-2, same as we saw before. Let's actually do it this time. Let's upload the W-2 and submit it. W-2 received securely. I'm now evaluating your finances to determine a pre-approval amount; I'll have the decision for you shortly. Here we go, fingers crossed. Let's hope Sarah gets approved. It's not over $5 million, so that rule we added isn't going to come into play. Boom, congratulations. Awesome. You've been pre-approved. Would you like me to email it? Yes, definitely. Check your inbox. Is there anything else I can help you with? Let's say that was it. Thanks for getting back to me so fast. Awesome.&lt;/p&gt;
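
&lt;p&gt;The "over $5 million" guardrail mentioned here is a rule that was added earlier in the demo. A hypothetical one-liner version of such a threshold rule, just to illustrate the idea (the real rule lives in the Launchpad workflow, not in code anyone wrote):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical rendering of the "over $5 million" rule from the demo.
HIGH_VALUE_THRESHOLD = 5_000_000

def needs_manual_review(loan_amount):
    # Sarah's $1,000,000 request is below the threshold, so the rule
    # never fires and automated pre-approval proceeds.
    return loan_amount > HIGH_VALUE_THRESHOLD

print(needs_manual_review(1_000_000))  # False
&lt;/code&gt;&lt;/pre&gt;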

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=990" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh3s5m358yy668d1onrdy.jpg" alt="Thumbnail 990" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=1010" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftc82h8w6oniscmu65hqq.jpg" alt="Thumbnail 1010" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All right, John, so I think I did it. I built the application, and I think Sarah's going to be happy. So with that, let's flip back. That is awesome. Thanks so much for sharing all that with us. I'm just going to quickly recap our five steps with what Steve has shown us. The first thing you did, Steve: you made the design, leveraging vibe development to create your Gen AI powered blueprint. And then you made it real, secure, and scalable on a pre-built AWS architecture.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=1020" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6wq43bh0xm6ttq93cmt3.jpg" alt="Thumbnail 1020" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=1030" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwmr0dzl0nmsmiqvibkuy.jpg" alt="Thumbnail 1030" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=1040" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fleo3gi656gomijpret1x.jpg" alt="Thumbnail 1040" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You made it work with enterprise grade capabilities and with workflows managing your AI agents. You made it available to your subscribers like U+ Bank to configure and deploy to their customers. And you made it rain, with real live customer usage from folks like Sarah using the U+ Bank deployment of the app. So in five simple steps you built an enterprise grade Agentic SaaS application that dramatically shortened processing time: you took a cumbersome bank workflow and enabled it to be automated, delivering a great experience for Sarah, whose story can now finally have a happy ending.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=1070" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmdrwogrk0vgl4jikjiv3.jpg" alt="Thumbnail 1070" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=1080" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1mq0hvxcp9xacf8dumx8.jpg" alt="Thumbnail 1080" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=1090" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgxonjnlcqb8ers6emnqh.jpg" alt="Thumbnail 1090" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=1100" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzas9j09w301ya75m12m1.jpg" alt="Thumbnail 1100" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So there's really not much more for Steve and me to share, except to say: if you have any questions about anything we've shared today, if you want to learn a little about the amazing companies that are already building Agentic SaaS applications like the one Steve just showed on Launchpad, or if you want to learn about our security capabilities and how our full technical review can accelerate your path to building an application and getting it onto the AWS Marketplace, we would love to see you at the Pega booth, which is number 366, just over here. Or, if you'd like to get started with Launchpad today, you can do it on your own for free: just click that get started button and start with easy step number one, designing your app with your Gen AI powered blueprint.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=C2AcLKLATFI&amp;amp;t=1120" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fka5jvkfugj98l92e2jaj.jpg" alt="Thumbnail 1120" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So with that, we will say thank you all very much for spending this time with us today, and we hope you enjoy the rest of your time here at AWS re:Invent.&lt;/p&gt;




&lt;p&gt;; This article is entirely auto-generated using Amazon Bedrock.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS re:Invent 2025 - How Mary Technology is building the legal Fact Layer for agentic AI on AWS</title>
      <dc:creator>Kazuya</dc:creator>
      <pubDate>Thu, 11 Dec 2025 04:30:14 +0000</pubDate>
      <link>https://dev.to/kazuya_dev/aws-reinvent-2025-how-mary-technology-is-building-the-legal-fact-layer-for-agentic-ai-on-aws-3266</link>
      <guid>https://dev.to/kazuya_dev/aws-reinvent-2025-how-mary-technology-is-building-the-legal-fact-layer-for-agentic-ai-on-aws-3266</guid>
      <description>&lt;p&gt;&lt;strong&gt;🦄 Making great presentations more accessible.&lt;/strong&gt;&lt;br&gt;
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.&lt;/p&gt;

&lt;h1&gt;
  
  
  Overview
&lt;/h1&gt;

&lt;p&gt;📖 &lt;strong&gt;AWS re:Invent 2025 - How Mary Technology is building the legal Fact Layer for agentic AI on AWS&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In this video, Dan, CEO of Mary Technology, explains why large language models fail at legal document review for dispute resolution. He identifies four key problems: lack of training data due to sensitive information, LLMs being compression machines that lose critical legal nuance, facts not being readily extractable from uploaded data (like disambiguating "A. Smith" or "PT" for patient), and lawyers needing confidence verification rather than just answers. Mary solves this through a fact manufacturing pipeline that treats facts as first-class citizens, extracting entities, dates, and events with full explainability and provenance tracking. The platform achieved 75-85% time reduction in document review and a 96 NPS score, working with major firms like Arnold Bloch Leibler.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/cptduBASjRU"&gt;
&lt;/iframe&gt;
&lt;br&gt;
; This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.&lt;/p&gt;

&lt;h1&gt;
  
  
  Main Part
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=0" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8y30g0law8ojemtzhsu2.jpg" alt="Thumbnail 0" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Large Language Models Fall Short for Legal Document Review
&lt;/h3&gt;

&lt;p&gt;Hello everyone, my name is Dan and I am the CEO and co-founder of Mary Technology. We're a legal tech firm based in Sydney, now with a global presence, and we help law firms automate document review. That's a major challenge for large language models, and I want to talk to you today about how Mary is trying to solve it. Just before we start, can I ask how many people here are heads of legal operations inside large enterprises, or have their own law firm? Yeah, okay, great, cool.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=50" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr49vwpq7vqimt2wp3ucc.jpg" alt="Thumbnail 50" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=60" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0wnvntb8vnkdv35qeak.jpg" alt="Thumbnail 60" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So here's the problem. Large language models, even with Retrieval-Augmented Generation or agentic frameworks, are not fit for purpose for legal dispute resolution workloads. There are a number of problems, and I'm going to talk about four today, the first one being the training problem: the availability of training data. The sorts of data that we work on every day for law firms and legal teams that deal with disputes are very sensitive, so this information isn't available publicly, and you certainly can't collect and train on it when it contains sensitive information from law firms' customers or your internal employees.&lt;/p&gt;

&lt;p&gt;The second challenge is that there isn't a single right answer to tell a large language model to trend towards, because there are always at least two sides to a matter. So you can't just say, hey, here's the right answer, and optimize towards it. You have to include and understand all of the potentially available narratives and correct answers, depending on which side you're representing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=120" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb0kyo12fy7oa5qikpju3.jpg" alt="Thumbnail 120" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=130" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsrshwp4l9rnrdkqsk6rc.jpg" alt="Thumbnail 130" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The second problem, and maybe the biggest one, is that large language models are compression machines. That's what they do really well, and I'm going to talk a little bit about some of these stages of compression. The first thing that happens when a large language model receives a document is that the page is ultimately turned into an image. Then, whether that image has words on it or is actually just a picture, it will still ultimately be converted into text. But particularly in legal documents, where you actually have lots of words, that conversion takes away some of the legal nuance and important meaning that may be present, things like handwriting or a small note in the margin.&lt;/p&gt;

&lt;p&gt;Once the document has been turned into text, the text is turned into tokens, then into embeddings, then into a contextual compression, and ultimately into something suitable for chunking and summarization. Each layer of that compression removes meaning, and it's that meaning and nuance that's so important to law firms and to anybody trying to understand a dispute, or the facts, which are the core of a dispute.&lt;/p&gt;
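
&lt;p&gt;A toy Python sketch of that lossy pipeline, purely illustrative (the stages are simplified and the sample document is invented), showing how detail disappears at each step:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Toy illustration (not a real ingestion stack) of the lossy stages above.
page = {
    "body": "On 05/03 the patient was reassessed.",
    "margin_note": "handwritten: swelling persists",  # nuance at risk
}

# 1. Rasterize + OCR: assume the body survives but the margin note is lost.
text = page["body"]

# 2. Tokenize: layout and position vanish; only word pieces remain.
tokens = text.lower().replace(".", "").split()

# 3. "Embed" (crudely, as a bag of words): exact order and wording are
#    gone; only rough gist remains.
embedding = {}
for t in tokens:
    embedding[t] = embedding.get(t, 0) + 1

# 4. Chunk and summarize: specifics collapse into gist.
summary = "A patient was reassessed on a date."

print(text)       # margin note already gone after OCR
print(embedding)  # word order gone
print(summary)    # the date and the swelling detail gone
&lt;/code&gt;&lt;/pre&gt;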

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=190" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz4vptdwtex3e11s5j6p6.jpg" alt="Thumbnail 190" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's what they're really good at, though. We're not saying large language models are bad; we're actually saying they're really, really good, but they're particularly good at being generally capable. They handle a massive range of tasks, they scale across massive corpuses of documents, and they generate fluent, plausible text without deep preprocessing. They're generalists, and they're really good at that. As you can probably tell, this slide was written by an LLM: it's done its lovely emojis and done a very good job of telling me what the slide should say.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=220" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq4qh9zcwylc1v4m50iqv.jpg" alt="Thumbnail 220" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=240" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb4pdpql9ear2msnof1i9.jpg" alt="Thumbnail 240" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Challenge of Facts: Context, Provenance, and the Limits of Compression
&lt;/h3&gt;

&lt;p&gt;The third problem: the facts are not in the uploaded data. This is a bit of a strange one and might take a small amount of explanation, but the facts are what's at the core of a legal matter or dispute. So I'm just going to give you an example. Here's a fact that might be present inside a document. It gives the date, which is wonderful, and then it says A. Smith reported an error.&lt;/p&gt;

&lt;p&gt;Here are a few of the challenges that explain why you can't just extract this piece of data and assume it's ready, in its current state, to be used for downstream AI processing. What if there are multiple people within that corpus of documents called A. Smith? One could be Alice, one could be Andrew, but in order to make this a useful or meaningful fact, you actually have to understand which A. Smith it is. And just by using a large language model, you can't do that.&lt;/p&gt;

&lt;p&gt;We're in the US today, but where we're from, Australia, the same written date reads completely differently: this could be the 5th of March or, there we go, the 3rd of May. So you've got to understand the context of this matter so that you can actually say, great, it's probably this date. And in this matter, is a reported error on this date even important?&lt;/p&gt;

&lt;p&gt;That's obviously more to do with context: maybe this isn't a relevant fact for you to dig into further. It's also fragmented: how many times is this particular fact mentioned throughout all of these documents, and do those mentions conflict with this fact or support it? And finally, provenance: what kind of document did it come from? If it's a primary document, say somebody detailing what a CCTV camera saw, versus hearsay from a statement given to a police officer, those carry different weight as to how relevant and meaningful they are when relied on in court or in a litigation process.&lt;/p&gt;
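
&lt;p&gt;Pulling those challenges together, here is a minimal Python sketch of the kind of metadata a fact would need to carry before it's safe for downstream use. This illustrates the idea only; it is not Mary Technology's actual schema, and every field name and value is assumed.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Minimal sketch of a fact with disambiguation, date, fragmentation,
# and provenance metadata (all names and values invented).
from dataclasses import dataclass, field

@dataclass
class Fact:
    text: str                    # e.g. "A. Smith reported an error"
    entity_candidates: list      # who "A. Smith" might be
    resolved_entity: str = ""    # filled in after disambiguation
    date_raw: str = ""           # "05/03" exactly as written
    date_iso: str = ""           # resolved using the matter's context
    mentions: list = field(default_factory=list)  # other docs citing it
    provenance: str = "unknown"  # e.g. "primary_record" vs "hearsay"
    rationale: str = ""          # why each resolution was made

fact = Fact(
    text="A. Smith reported an error",
    entity_candidates=["Alice Smith", "Andrew Smith"],
    date_raw="05/03",
)

# Only after resolution is the fact safe for downstream use:
fact.resolved_entity = "Alice Smith"
fact.date_iso = "2024-03-05"   # Australian day-first reading (year invented)
fact.provenance = "hearsay"    # e.g. from a witness statement, not CCTV
fact.rationale = "Only one A. Smith appears in the error-log emails"
print(fact)
&lt;/code&gt;&lt;/pre&gt;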

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=350" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6x6esh55x8dadfo7id90.jpg" alt="Thumbnail 350" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's another example. This is actually from my co-founder Rowan's medical documents, and there are a couple of challenges in here that I'll hopefully show you how Mary, our platform, deals with a little later. The one thing I'm going to point out here, and this is incredibly common, is PT. What it actually means here is patient. Now, what would happen if you put this fact into a large language model? It wouldn't understand that it's talking about Rowan, the person it's actually referring to, the patient. So you need to correct and converge pieces of information like this so that you can actually leverage them later in the fact and document review process.&lt;/p&gt;
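
&lt;p&gt;A hypothetical sketch of that correction step: expand the abbreviation, then bind "patient" to the actual person before anything downstream sees the fact. The abbreviation table and context here are invented for illustration.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical abbreviation expansion + person binding (all values invented).
ABBREVIATIONS = {"pt": "patient", "dob": "date of birth"}

def normalize(fact_text, matter_context):
    words = []
    for w in fact_text.split():
        expanded = ABBREVIATIONS.get(w.lower().strip(".,"), w)
        # "patient" only becomes meaningful once bound to a real person:
        if expanded == "patient":
            expanded = matter_context["patient_name"]
        words.append(expanded)
    return " ".join(words)

context = {"patient_name": "Rowan McNamee"}
print(normalize("PT reassessed, swelling remains", context))
# Rowan McNamee reassessed, swelling remains
&lt;/code&gt;&lt;/pre&gt;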

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=400" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqt7sfdlns1rfiafjxrf2.jpg" alt="Thumbnail 400" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is a more drawn-out example of something a large language model would do incredibly poorly, in comparison to a system designed and built to support litigation workflows and document review in a domain with as low a fault tolerance as law. So imagine I've written a letter. Within it, I don't write my name, and I don't say who I'm writing to. And I describe a crime I've committed, but I don't state it plainly; I don't write that I stole that car. I say it in some colloquial way.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=440" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhczvs8y7hi4n7udlh48.jpg" alt="Thumbnail 440" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The challenge for a large language model is that, if that document is placed among 4,000 other documents and you were to ask it, did Daniel steal a car, it wouldn't ever be able to say yes, because Daniel isn't mentioned, I didn't say I stole a car, and I also don't say who I've written the letter to. What Mary, or any tool that's going to do this type of work in a legal document review process, needs to be able to do is things like look at the handwriting of that letter. Is that handwriting present in any of the other documents? Can we understand who actually wrote it? Also, maybe I wrote on it a date when I went to the park. We need to be able to understand that in this other document over here, Daniel said he went to the park on that date, and draw a conclusion: brilliant, maybe you should review whether or not this is Daniel, because we've got some supporting evidence. That's an example of where large language models just fall short in this type of work.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=500" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8pb040a3hg45kgepsn4t.jpg" alt="Thumbnail 500" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Building Confidence Through Verification: Treating Facts as First-Class Citizens
&lt;/h3&gt;

&lt;p&gt;And the last problem  is that even if I did all of that fact extraction perfectly, that's not really what lawyers and legal teams need when undertaking an investigation. They actually need to feel really confident about those facts and the narrative that they're going to present on their client's behalf or for their company. So here's an example. Is anybody a lawyer? OK, well, we've got one, brilliant.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=530" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsp65ze6018i81wqix42y.jpg" alt="Thumbnail 530" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So this is just an example that hopefully gets you thinking about why this is so important. It's an exercise: the perfect letter of demand. Imagine a large language model spits out a document and says: here's a perfect legal document, and I've done the work for you. I've gone through this entire massive corpus of documents, I've extracted all of the facts, I've reviewed what's relevant in the context of the case, and I'm now going to give you the perfect document, in this case a letter of demand. I can assure you it's the ideal letter to file. It's supported with the perfect evidence, it's in the template you normally use, all of that good stuff. Please now go and file it with the other side or with the court. Would you go and file that?&lt;/p&gt;

&lt;p&gt;That's the correct answer, good. You can't, because ultimately you have an obligation to whoever it is you're representing, and more importantly, you have a responsibility to make sure you're confident in the action you're taking. And so, unlike a large language model, which is perfectly built to receive a question and then deliver a correct answer, what's required in this type of document review and litigation workflow is something that doesn't know what the question is going to be, yet can understand all of the facts and give you all of the potential narratives for you yourself to review, verify, and become confident in.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=620" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feqf4d1e1x6hny0iymrpk.jpg" alt="Thumbnail 620" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=630" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzxkjjihbehdwn60lxwii.jpg" alt="Thumbnail 630" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So how do you fix all of these problems? Well, in a way that large language models simply don't like, because it's incredibly process-heavy, and it's not a generalized task; it's very specific.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=670" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6035odztveioqdpryhdq.jpg" alt="Thumbnail 670" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=680" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftok269f8tp39tayfhlsd.jpg" alt="Thumbnail 680" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And the first thing you've got to do is treat facts as first-class citizens. In the same way that a large language model team says the most important thing is having an incredibly efficient embeddings model, a fact review platform needs to have the best fact model: take those facts, process them through this incredibly heavy manufacturing pipeline, and deliver something you can rely on and then ultimately verify. Which is why you also need a world-class review and verification experience: this is where the lawyer, or the team undertaking the investigation, goes to review the facts that have come out, build their narratives, and more. And finally, and this is maybe the one piece missing from what I've spoken about before, you need to take this layer of facts that you can feel confident about and pipe it through to downstream AI applications, things like OpenAI or any other unified interface; you can just pipe the facts in there and have them working.&lt;/p&gt;
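
&lt;p&gt;As a rough illustration of that last step, here is a minimal sketch of verified facts being formatted as grounded context for whatever downstream model a firm uses. The prompt format and field names are assumptions, not any specific vendor's API.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Minimal sketch: verified facts become grounded context downstream.
verified_facts = [
    {"id": "F12",
     "text": "Rowan McNamee reassessed; swelling to right bicep remains",
     "source": "GP clinical notes",
     "verified_by": "reviewing lawyer"},
]

def build_grounded_prompt(question, facts):
    lines = [f"[{f['id']}] {f['text']} (source: {f['source']})"
             for f in facts]
    return ("Answer using only the verified facts below, citing fact IDs.\n\n"
            + "\n".join(lines)
            + f"\n\nQuestion: {question}")

prompt = build_grounded_prompt(
    "Did Rowan ever present with swelling to the arm?", verified_facts)
print(prompt)  # send to whichever LLM endpoint the firm uses
&lt;/code&gt;&lt;/pre&gt;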

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=710" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6thkec2p8zbgvdest3kp.jpg" alt="Thumbnail 710" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=720" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzmacq4mvpbrigw6fciqm.jpg" alt="Thumbnail 720" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Mary Technology's Solution: A Fact Manufacturing Pipeline for Legal Workflows
&lt;/h3&gt;

&lt;p&gt;Okay, so I've got a short video to show you how we've solved some of these problems. As a lawyer, when you receive a case,  your first goal is to get the facts straight. But this is never straightforward.  It means digging through endless emails, PDFs and records, splitting documents, cross-checking dates, piecing together a clear timeline. It's slow, it's manual, and it can take anywhere from hours to days before you've even started the legal work. We call this fact chaos.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=760" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjk7i5cn8krfri2szg1ny.jpg" alt="Thumbnail 760" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But what if the moment a case landed in your inbox, everything was set in motion? We could take the attached documents in the email or find uploaded documents in the tools you already use, then scan and process them. The messy bundled files could be split into clear, structured documents. They could be categorized, renamed,  and seamlessly organized back into your workflow, exactly where you need them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=770" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fis7pubexj63aw9tindeu.jpg" alt="Thumbnail 770" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=780" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fomfxl34ymoy3nj60aqws.jpg" alt="Thumbnail 780" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=790" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk459zhoknzivoe7ar80o.jpg" alt="Thumbnail 790" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=800" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frlrz41vg9hatvlip2nhw.jpg" alt="Thumbnail 800" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But what if organizing documents was just the start? What if we could unblock you completely  so you can get started on the real legal work? We can pull key entities from every document, names, businesses, and their role in the case,  giving you instant insight into exactly who matters. Find and capture significant dates when events occurred. Get a concise case summary, distilling  the entire matter into a few clear paragraphs. Identify gaps that need assessing, detect possible data leaks, build a timeline of events, and extract any other key  details relevant to your case.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=810" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftn345tvfz9gn8ta4myrs.jpg" alt="Thumbnail 810" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=820" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2xxz6w413iyyvj09l4sr.jpg" alt="Thumbnail 820" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=840" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjessdu16egqb73a1d7xt.jpg" alt="Thumbnail 840" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then bring all these insights together in a single dashboard, so anyone can get a firm grip on a case in minutes, even if they've never seen it before.  Delve deeper with generated chronologies, surfacing only what's most relevant to your case. Invite experts to work  alongside you and your colleagues in real time and draft directly into the tools you already use. Because Mary connects with your existing systems, it adapts as new evidence, events, and documents emerge, keeping your case aligned every step of the way. When the facts are clear, decisions are faster. Fact chaos,  solved.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=850" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3c18ktnc3ubrt2whz4be.jpg" alt="Thumbnail 850" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Okay, so just to conclude what I was trying to get at there, and hopefully what you could see in the video: it requires a novel approach. And interestingly, we couldn't use what most people can, which is Retrieval-Augmented Generation or agentic workflows, to just go into the documents, extract the facts that are meaningful, and present them to a user. That's what I'm saying up here: we can't just use good enough, it has to be brilliant.&lt;/p&gt;

&lt;p&gt;And so we built a fact manufacturing pipeline, where we extract every event, entity, actor, issue, and loads of other things. Ultimately, imagine a fact as an object with lots of metadata underneath it that allows you to build relationships and construct a case, almost as a digital case object. It will then do things like tell you whether or not a fact contradicts another.&lt;/p&gt;

&lt;p&gt;And the important part here is that every piece of metadata underneath that object has to be explainable. We'll surface and expose the rationale for every single decision we make. If we tell you a date, we're going to tell you how we got to that date. If we tell you something is relevant, we're going to tell you how it's relevant.&lt;/p&gt;

&lt;p&gt;Only after producing that high-quality fact layer do we then use the more standard, though still very new, technologies such as RAG and agentic frameworks. The result is a persistent, auditable fact layer that you can rely on, both in the platform itself when you're doing that investigation, and downstream when you want to pipe that information out and use it for drafting or other associated legal tasks.&lt;/p&gt;
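
&lt;p&gt;An illustrative sketch of two of those properties: a rationale attached to every derived field, and contradiction detection between facts. The data and the check below are made up; this is not the actual pipeline.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Illustrative sketch: rationale on every field + contradiction check.
def contradicts(fact_a, fact_b):
    """Toy check: same entity and event, but irreconcilable dates."""
    same_event = (fact_a["entity"] == fact_b["entity"]
                  and fact_a["event"] == fact_b["event"])
    return same_event and fact_a["date_iso"] != fact_b["date_iso"]

f1 = {"entity": "Daniel", "event": "visited the park",
      "date_iso": "2023-06-01",
      "rationale": "date handwritten on the letter itself"}
f2 = {"entity": "Daniel", "event": "visited the park",
      "date_iso": "2023-06-08",
      "rationale": "Daniel's account in a witness statement"}

if contradicts(f1, f2):
    # Surface both rationales so the reviewer, not the system, decides
    # which account to rely on.
    print("Conflict:", f1["rationale"], "vs", f2["rationale"])
&lt;/code&gt;&lt;/pre&gt;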

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=960" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj79iimhoa8ynylq5c0w3.jpg" alt="Thumbnail 960" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So I'm just going to show you very briefly what the platform looks like for a single fact, to highlight that challenge from before, when I spoke to you about "patient". You can see here there's a fact at the top, with a date and a time, and it says a chap called Rowan McNamee is reassessed, swelling to right bicep remains. You'll notice that it's an incredibly summarized and concise fact. That's because that's what lawyers need: they need to be able to look at all of these facts, because the majority of them won't be relevant.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=990" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feewpashxvx44tejdtm3a.jpg" alt="Thumbnail 990" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So if we zoom in: imagine I've moved my mouse over to the right-hand side and hovered over that relevance indicator, and it's going to tell me the entry focuses on a separate medical issue. Now bear in mind, I know a lot of my examples have been personal injury, but you can do this for employment or any other type of law; in this particular case, it's personal injury. So it's going to tell me why this isn't relevant, but I'm then able to dive deeper if I think it might have some relevance after all.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=1010" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq9gf33nxj8q9igqfxm3t.jpg" alt="Thumbnail 1010" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see I can pull up the actual document, at the exact page and spot where that fact has come from, and I can also rely on Mary to give me more rationale as to how it came up with this fact, with more detail I can expand the fact with if I want more information. You'll notice this handwriting's terrible. Well, it's actually pretty good handwriting, but yeah. Mary works primarily on unstructured data, rather than things like contracts where all of the information is very easy to get out. We have to focus on the documents that are really difficult.&lt;/p&gt;

&lt;p&gt;But the reason I bring this up is that if we look in that document, that's where the fact is from: PT, or patient. Well, we don't just rely on that; this is one of those elements where we correct the fact as it goes through the pipeline. We say Rowan McNamee, so that ultimately, when we pipe this fact down into another downstream AI capability, it knows it's looking at Rowan McNamee. So when you ask, hey, did Rowan ever go into a hospital with this, it can say yes with confidence, and you can go directly back to where that was found.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=1070" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuagh7c1tx8atdj00sin2.jpg" alt="Thumbnail 1070" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So just quickly on where we're at in our journey: we work with many of the largest firms in Australia, including Arnold Bloch Leibler, one of the largest law firms in the world, and with firms both here and over in the UK and everywhere else, and we're bringing on more firms every single week. Across all of our customers, we have achieved a 75 to 85% reduction in time spent on what is probably the biggest bottleneck in litigation: document review. It's where so much of the time is spent and so much of the cost is accrued, and we're reducing it significantly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=1120" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjbksghtouv3lv11lojc6.jpg" alt="Thumbnail 1120" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And overall, we've achieved an NPS score of 96. People really love using Mary, because this is one of the most difficult, annoying, frustrating jobs you can do as part of this process. I'm just going to leave this up on the screen briefly. It is a little bit Aussie, and I've had to redact the name, which is why there's a little dot here, but that's what one of our customers has said about how they use Mary.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cptduBASjRU&amp;amp;t=1140" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fun46o4wobw8jwrcyinde.jpg" alt="Thumbnail 1140" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So I'll open it up to questions if anybody's got any, but that's Mary Technology and how we're building this fact layer. Any questions? No? Cool. Thank you.&lt;/p&gt;




&lt;p&gt;; This article is entirely auto-generated using Amazon Bedrock.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS re:Invent 2025 - Can Your AI Show Its Work? Healthcare's Critical Imperative for Explainable AI</title>
      <dc:creator>Kazuya</dc:creator>
      <pubDate>Thu, 11 Dec 2025 04:30:09 +0000</pubDate>
      <link>https://dev.to/kazuya_dev/aws-reinvent-2025-can-your-ai-show-its-work-healthcares-critical-imperative-for-explainable-ai-4pi7</link>
      <guid>https://dev.to/kazuya_dev/aws-reinvent-2025-can-your-ai-show-its-work-healthcares-critical-imperative-for-explainable-ai-4pi7</guid>
      <description>&lt;p&gt;&lt;strong&gt;🦄 Making great presentations more accessible.&lt;/strong&gt;&lt;br&gt;
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.&lt;/p&gt;

&lt;h1&gt;
  
  
  Overview
&lt;/h1&gt;

&lt;p&gt;📖 &lt;strong&gt;AWS re:Invent 2025 - Can Your AI Show Its Work? Healthcare's Critical Imperative for Explainable AI&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In this video, Jed from Dataiku and Kaoutar from Sanofi discuss how pharmaceutical companies leverage AI to reduce the 15-year timeline from molecule discovery to market. Sanofi uses AI across their entire value chain—R&amp;amp;D, manufacturing, and commercial—with 70% of small molecules now utilizing AI. Kaoutar explains their RAISE framework for responsible AI, emphasizing explainability in a regulated industry. Their AI Foundry combines AWS, Dataiku, and Snowflake to ensure AI-ready data with proper lineage, traceability, and bias control. She stresses the importance of central governance with executive accountability, requiring business sponsorship and three-month value demonstrations for all AI initiatives. Rather than following hype, Sanofi focuses on scalability, integrity, and impact, using both classical AI and Gen AI where each makes sense. The conversation highlights that in pharma, explainability is critical because patients need to understand why medicine works before putting it in their bodies.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/1HGaUBD6CsA"&gt;
&lt;/iframe&gt;
&lt;br&gt;
; This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.&lt;/p&gt;

&lt;h1&gt;
  
  
  Main Part
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=1HGaUBD6CsA&amp;amp;t=0" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7blne13urjriylek4tfb.jpg" alt="Thumbnail 0" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Sanofi's AI-Powered Vision: Accelerating Drug Discovery Across the Value Chain
&lt;/h3&gt;

&lt;p&gt;Hi everybody, I'm Jed. I'm the SVP of AI and Platform at Dataiku, and I'm here today with Kaoutar from Sanofi. We're going to be talking a bit about explainability, about data pipelines, and about how the pharmaceutical industry is leveraging AI today. So can you start by educating us a little on how Sanofi is using AI today and, more generally, what it means to be a big pharma company? What are your goals in this industry?&lt;/p&gt;

&lt;p&gt;Yeah. First, Sanofi's statement is that we are an R&amp;amp;D biopharma company powered by AI at scale. Basically, our ambition is to shorten the distance, the timeline, between discovery and therapy, because it's usually about 15 years from when we start searching for a molecule to having it on the market. So for us, AI is one of the technologies and opportunities to shorten that distance.&lt;/p&gt;

&lt;p&gt;So what that really means in layman's terms is that you're having to make a really big bet on a couple of molecules, right, at any given time, and it takes a very long time to figure out if that bet is going to pay off. Speaking of Las Vegas and betting, this is a very bet-heavy industry. So how can AI help optimize that entire process? Where does it fit in?&lt;/p&gt;

&lt;p&gt;So first, the value chain starts, of course, from R&amp;amp;D, to manufacturing and supply, then the commercial side, and then employee experience. Our bet was that AI is everywhere: we didn't choose to put it only on R&amp;amp;D or commercial or manufacturing. It's all of them. And of course, you do have some quick wins where you see the outcome of AI quickly; for others, like research, it's long term.&lt;/p&gt;

&lt;p&gt;What's a good example of a quick win that you've seen? So quick wins can be on the commercial side or in manufacturing. Take yield optimization, for example: that's something we see in real time using AI, how we can optimize our supply chain, how we can make a connection between the scientists and the shop floor. Those kinds of things are quick wins. Or even the sales reps: how they can make decisions faster, how they can push the right content to the right HCPs. And yes, the long term is more on the discovery and research of molecules. But even there, today we are using AI on 70% of our small molecules, basically the chemical ones.&lt;/p&gt;

&lt;p&gt;Wow. And when we say AI, are we talking about classical AI, Gen AI, both? Where's the delineation there? It's all of them, basically, because for us it's not about hype; it's about scalability, integrity, and impact. It's about putting the technology in the service of the impact and the outcome. There are some use cases and domains where classical AI works well, others where Gen AI is a huge opportunity, and still others where AI agents are the better fit.&lt;/p&gt;

&lt;p&gt;Maybe I can take an example on Gen AI. Everything relating to regulatory submission today: Gen AI is a huge opportunity there. It accelerates filling in documents, it accelerates putting the right information in the right sections, and even the regulators today are open to that. For other things, like yield optimization, classical AI and time-series ML work well. So why go to Gen AI there, when the cost, the effort, and adoption are more complex?&lt;/p&gt;

&lt;p&gt;So you've invested for 30 years in classical AI; there's no reason to throw all of that out just because Gen AI arrived. Yeah, exactly. But it doesn't mean I don't have requests coming from the business: oh, we see opportunities in Gen AI, can we use Gen AI? You have objectives that help you decide whether Gen AI is the most suitable thing, or classical AI, or even just advanced analytics, right?&lt;/p&gt;

&lt;h3&gt;
  
  
  Explainability and AI-Ready Data: Building Trust Through the RAISE Framework
&lt;/h3&gt;

&lt;p&gt;And how does explainability come into play here? I know explainability is a very buzzed-up term, and of course in pharma, you really need to understand not just success but why success is happening. So what does explainability mean in pharma, and how important is it to you?&lt;/p&gt;

&lt;p&gt;So there is a fundamental thing in the pharma domain, because it's a heavily regulated domain, and the first step of it is AI-ready data. It starts with the data itself, and then it goes to the outcomes. These outcomes need to be trusted, secure, ethical, fair, eco-sustainable, and of course transparent and explainable.&lt;/p&gt;

&lt;p&gt;Here we have the explainability. That's what we have at Sanofi, what we call the RAISE framework: basically, responsible AI at Sanofi, respecting all of the pillars I just mentioned, and yes, explainability is part of it. And the funny thing is that when we talk about Gen AI, there is a contradiction between explainability and Gen AI, right? It's inherently a black box, right?&lt;/p&gt;

&lt;p&gt;Yeah, exactly. So what kind of tools or processes do you use to extract explainability from this Gen AI black box? So we built what we call the AI Foundry, and basically every AI product uses the ecosystem that we have in this AI Foundry. Three pillars: of course AWS, because if you don't mention AWS, I think both of us will be out today. So we have AWS, and we have Dataiku. We have you guys for everything around the data, the ingestion, the explainability of it, and the bias that we can have in the data, but also the monitoring of the ML and the foundation models that we are using. And we have Snowflake. So basically three providers to build the whole set of pillars that I mentioned in the RAISE framework.&lt;/p&gt;

&lt;p&gt;So maybe split up those three providers. Explain what each one of them does. We have AWS for infrastructure, Snowflake for the database, and then Dataiku. What do you use Dataiku for? So Dataiku is there to make the link between the scientists, the data people, and the business users. It's basically the front door for all the scientist personas: manufacturing and supply chain experts, commercial experts, R&amp;amp;D experts who don't care whether I'm using Gen AI or something else, but who need to have access to the data. They need to monitor the data, and they need to build an ML model and use it without being experts in the domain.&lt;/p&gt;

&lt;p&gt;Got it. Part of what I understand in your process is this desire to have AI-ready data. You've said that phrase to me a couple times in previous conversations. Tell us more about what that means. What is AI-ready data? So AI-ready data: first, the data is a shared accountability between the business and the IT or digital team. Then the data is known: we know where it is and we can find it. Also, the data is shareable, because we are moving into a new era where I am not building my product only for my business; the value of my product is cross-functional, so it needs to be shareable with other products to make decisions, and in the end to be available to everyone.&lt;/p&gt;

&lt;p&gt;And then we have something related to the data itself, which is that the data is secure, of high quality, without bias, trusted, and safe. That's what I mean by AI-ready data. Secure, high quality, without bias, trusted, safe makes a lot of sense. And to do that, you need, like what we're using you guys for, to showcase that we have this lineage of the data end to end. We can share outcomes between products, sharing the data of course with access control, but we have this traceability and explainability of what we are doing, and of why and how I got this outcome.&lt;/p&gt;

&lt;h3&gt;
  
  
  Governance as the Foundation: Managing Risk and Value in AI Implementation
&lt;/h3&gt;

&lt;p&gt;Makes sense, being able to show your work, basically. So what do you think some of the biggest risks are in pharma when you're implementing these new AI capabilities? What do you really need to watch out for? So we have one objective that I mentioned at the beginning. We want to reduce the timeline from discovery to therapy, and we have a lot of products that help us do that, from R&amp;amp;D to commercial to employee experience. And to do that, we need to have clear governance. Again, it's not about hype. AI can be used, yes, for several purposes, but we have one purpose. If the product that we want to build and deploy is not helping that statement, it's out of the governance.&lt;/p&gt;

&lt;p&gt;So that's why we set a clear governance, and that's why we set one platform, which is our AI Foundry, around this ecosystem. And AI, again, for us is an opportunity, a technology that helps us to accelerate, to go faster with efficiency.&lt;/p&gt;

&lt;p&gt;Got it. Yeah, I think we saw early on in this generative AI phase the inclination to put something out there because you needed to have some kind of generative AI, right? Early on, the classic thing was the HR chatbot, and now we're moving on to things with a clearer ROI, a more obvious way of moving the needle. Shortening that time to value makes a ton of sense.&lt;/p&gt;

&lt;p&gt;One thing that I hear often among clients is obviously around this governance concept. I guess a common governance idea is data governance, so basically who has access to what data, but then there's also workflow governance, right? Who has approved using this project, whether there's documentation, whether there's been a risk analysis. Do you see the need for both types of governance, or is there really one that's a lot more important?&lt;/p&gt;

&lt;p&gt;We have basically two governance layers. One is top level with Excom members. Basically all our AI and generative AI initiatives go through this governance. We have what we call the front door. It's the first entry point for every employee in Sanofi who has an idea about AI that can change the world. We told them: okay, prepare your pilot ID card. What does that mean? They need to have business sponsorship, a high one at Excom level, and digital sponsorship, because we need to ensure that the value will be there, concrete value, and also, in terms of technical feasibility, that it can be done.&lt;/p&gt;

&lt;p&gt;We ask them to have commitments, short-term commitments, three months. You need to showcase part of the value, because somebody can tell you, look, we will have 40 million ROI. Cool, okay, so in a year? Yes, in a year, okay. I'm a little bit stupid; I will just do an easy calculation and scale that year down to three months. I will give you a sandbox with the whole set of tools, and show me the value in those three months. Of course we have the accountability of Excom.&lt;/p&gt;

&lt;p&gt;Then people will say, oh, okay, I will not burn myself, because I'm putting myself in front of Excom, in front of a sponsor, a senior leader in digital. I can give you an example on the commercial side: we received 56 use cases, and at the end we landed at two. They withdrew the other use cases themselves when we asked for those prerequisites before pushing a use case. That's this governance. It's needed, because if not, just remember 2010, when all the advisors said that if you invest one dollar in AI you will get four dollars. A year after that, they had lost four dollars for every one dollar invested, so we can't go without this governance on the value.&lt;/p&gt;

&lt;p&gt;Then you have the other governance, which is related to data and AI, which is, I would say, the more technical one, about the whole concept of AI-ready data. We need to understand the data, because there are some issues and some risks; we have regulators and patient data, all these kinds of things. We need to have this easy-to-digest data governance, what we call AI-ready data. It's basically the data governance.&lt;/p&gt;

&lt;p&gt;That makes a ton of sense. When we're talking about those regulations, with the pharmaceutical industry, you really have different regulations and different rules in every country you're going into. How do you manage all of that? That just seems like it's such a heavy burden. Do you have different teams or different ways of targeting, let's say the US regulations versus EU regulations, or is there a central governance team that manages all this?&lt;/p&gt;

&lt;p&gt;The governance is central, a central governance. Basically it's two: the data team, which I'm heading, and the governance with the generative AI board with the Excom members, and yes, it's at global level. Then you have sub-teams at country level, because you need them; they are more expert in their domain. At the end, if you look at the whole set of regulations, there are a lot of commonalities, and yes, there are some specificities which you need to manage, but that's why we have this global central team for governance, and you have local teams, but they're working with the global team. They are not reinventing the wheel, but they are specialized in governance in their countries.&lt;/p&gt;

&lt;h3&gt;
  
  
  Beyond the Hype: Focusing on Shared Objectives and Patient-Centered Outcomes
&lt;/h3&gt;

&lt;p&gt;Yeah, okay, that makes a lot of sense. I've been thinking a lot about how the names of workers or maybe the names of titles are going to change as this stuff evolves in organizations, and I'm starting to see a split where maybe 80% of workers are going to be agentic consumers or users, and then 15% are going to be designers of these agentic tools or components, and then you have 5% that's governance.&lt;/p&gt;

&lt;p&gt;Do you see that sort of distribution reflected inside of pharma organizations?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=1HGaUBD6CsA&amp;amp;t=960" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwa5zudk5yj0agma6jru7.jpg" alt="Thumbnail 960" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Not all pharma companies are the same. I can speak at least for Sanofi. We have a purpose, which is to put medicine in the hands of patients in the world. For us, the patients are not US patients or French patients or German patients or Chinese patients; they're patients in the world. So we don't care about the hype. We are not following it, and if you look, not only Sanofi but all of pharma was a little bit late to embrace the AI domain, because of regulation, because of constraints, because of a lot of things. Because at the end, it's not only about quick wins. You said that at the beginning: the research of molecules is long term, so you can't do it just because roles are changing.&lt;/p&gt;

&lt;p&gt;Yes, I see that there are some teams saying, okay, I will be an agentic AI specialist, I am a Gen AI specialist. But what does it mean, Gen AI specialist or agentic AI specialist? If you are an AI expert, you manage Gen AI and you manage AI agents, so there is no separate expert for each sub-domain. Honestly, we don't care about the title. We don't care about the name. For us, it's a technology that serves an objective, and we see whether it makes sense to apply it to some objectives or not. That's why the governance, again, is really key.&lt;/p&gt;

&lt;p&gt;It's interesting that you keep coming back to having a single objective, having a goal or a set of decisions, a North Star, a target to lead towards. As we look at other industries, I won't ask you to be an expert on them, but do you think every industry or every company needs to have a single driving objective or should have a single driving objective when they're implementing AI?&lt;/p&gt;

&lt;p&gt;I think it's key. If the company doesn't have a shared objective, that means we'll have more and more silos, more and more local objectives. And in mathematics, we know that the sum of local optima falls short of the global optimum. So if we want to achieve an objective as a company, it needs to be a shared one. That's why we put this governance on AI and Gen AI with the executive committee, because we needed accountability from all divisions.&lt;/p&gt;

&lt;p&gt;All companies have a starting point for their product and an end point. So at the end, all companies have a shared objective that they need to highlight. For us, it's the distance between discovery and treatment. A molecule starts in research, but research alone can't put it on the market. They need to develop it, they need to manufacture it, and then they need to commercialize it. So the workflow of the data and of the objective is the same, but with different ways of contributing to that objective.&lt;/p&gt;

&lt;p&gt;We're touching the entire value chain. That makes sense. We need to be injecting AI inside that value chain only in the places where it actually makes sense and pushes towards that end goal. Right, perfect. So we're right about at time here. Any last words you'd like to say about explainability, about getting your data ready, and about rolling out AI inside the pharmaceutical industry?&lt;/p&gt;

&lt;p&gt;So there are two things. One, we don't need to follow hypes, because every company has a context, has constraints, and has a history. We need to move forward with the whole heritage that we have. It's about going far, not going fast, because alone we go fast, but together we go far. That's one thing. The other thing is biology, in my domain. Not everything is explainable, but medicine needs to be explainable for patients. If I'm going to put it in my body, I want to know why it works.&lt;/p&gt;

&lt;p&gt;Exactly. Makes a lot of sense. Thank you so much, and I hope everybody out here learned something. Thank you.&lt;/p&gt;




&lt;p&gt;; This article is entirely auto-generated using Amazon Bedrock.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS re:Invent 2025 - Simplify permissions management across Amazon Redshift warehouses (ANT350)</title>
      <dc:creator>Kazuya</dc:creator>
      <pubDate>Thu, 11 Dec 2025 04:20:10 +0000</pubDate>
      <link>https://dev.to/kazuya_dev/aws-reinvent-2025-simplify-permissions-management-across-amazon-redshift-warehouses-ant350-1mll</link>
      <guid>https://dev.to/kazuya_dev/aws-reinvent-2025-simplify-permissions-management-across-amazon-redshift-warehouses-ant350-1mll</guid>
      <description>&lt;p&gt;&lt;strong&gt;🦄 Making great presentations more accessible.&lt;/strong&gt;&lt;br&gt;
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.&lt;/p&gt;

&lt;h1&gt;
  
  
  Overview
&lt;/h1&gt;

&lt;p&gt;📖 &lt;strong&gt;AWS re:Invent 2025 - Simplify permissions management across Amazon Redshift warehouses (ANT350)&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In this video, Sandeep Adwankar introduces Amazon Redshift Federated Permissions, a new feature that simplifies permission management across multiple data warehouses. The solution allows permissions to be defined once in a producer warehouse and automatically enforced across all consuming warehouses through AWS Glue Data Catalog registration. It supports fine-grained access controls including data masking, row-level security, and column-level access using global identities via IAM roles or IAM Identity Center. A live demo demonstrates how a marketing warehouse can share data with a sales warehouse, applying masking and row-level policies without requiring configuration on the consumer side. The feature enables horizontal scalability and auto-mounting of databases, eliminating the need for manual data share creation. Available in Redshift patch version 1.97.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/H9SXN0d1z0U"&gt;
&lt;/iframe&gt;
&lt;br&gt;
; This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.&lt;/p&gt;

&lt;h1&gt;
  
  
  Main Part
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=0" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0hay6m3wyu5zqpz889o2.jpg" alt="Thumbnail 0" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Introducing Amazon Redshift Federated Permissions: Simplifying Multi-Warehouse Data Access
&lt;/h3&gt;

&lt;p&gt;Hello everyone. Welcome to the session, Lightning Talk ANT350, Simplifying Permission Management for Amazon Redshift Warehouses. This is a new launch: we are launching Amazon Redshift Federated Permissions, which will enable customers to simplify permission management. I'm Sandeep Adwankar. I'm a Product Manager at AWS, and I'm really excited to talk about this new launch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=30" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbr6m6tp2oukjc4dw97sl.jpg" alt="Thumbnail 30" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Amazon Redshift is Amazon's cloud data warehouse. It provides high scalability across petabyte-scale warehouses. It provides 2x price performance compared to other warehouse solutions in the market, and it provides access to data across multiple data sets for multiple use cases. Customers are also building multiple warehouses: as the data grows, they want to add additional warehouses, and that's where the multi-warehouse architecture becomes important.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=70" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs5g7md90186ves2ipz27.jpg" alt="Thumbnail 70" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Customers can now keep data for different use cases, for example reporting dashboards, exploratory analytics, or streaming and batch, in different clusters or warehouses at the same time, keeping those workloads isolated, with the ability to manage them separately, attribute costs, and spin them up and manage them very quickly. What we wanted to do with this launch is simplify this multi-warehouse architecture, and to do that, we basically wanted to simplify the governance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=110" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F88hx6or8uoghx3xx5kqh.jpg" alt="Thumbnail 110" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this launch, we thought: what if we could make every warehouse's data accessible to any new warehouse that comes in? For example, you have a new warehouse, in blue, which is the marketing warehouse. You have the marketing data, but these marketing analysts want to access the data across all your other warehouses, right? Your other warehouses could be data science or streaming or others. How can that marketing analyst role start accessing that data without creating an additional set of permissions? What if the access is based on the existing permissions model?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=150" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F79mq5fxctaizl0wslnjo.jpg" alt="Thumbnail 150" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So let's say that across your different warehouses you have applied permissions for roles, for groups, at row level or masking level, for the marketing analyst role. Those permissions should be enforced without doing anything, right? And then lastly, we want to ensure that access is based on who you are rather than what you connect to, right?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=170" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft5c6byo972evch6mjlaz.jpg" alt="Thumbnail 170" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a marketing analyst with my role, I should be able to access the data from any warehouse based on the permissions that are granted to that particular role. So with that in mind, we are really excited to announce Amazon Redshift Federated Permissions. It provides simplified permission management across your multiple warehouses. You define your permissions once, where your data is, and they're enforced across all your warehouses.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=200" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmk9vx5tf44q2h55t5182.jpg" alt="Thumbnail 200" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can apply your existing set of Redshift permission models, for example fine-grained access models such as data masking, column-level access, and row-level access, and those will be applied and enforced as you query from any warehouse. This gives you an improved user experience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=220" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2iiu8rn0cuipfq36hfbu.jpg" alt="Thumbnail 220" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Instead of creating data shares, you register your warehouse once into a global catalog, and now all these warehouses will be auto-mounted in all the clusters. And this provides the horizontal scalability, right?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=240" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4c8pw96rbqxlb3qkn9ru.jpg" alt="Thumbnail 240" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can now bring up a warehouse very quickly, and with this permissions model, you also get the data sharing and access across the different warehouses.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=250" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi4042m9mw1gx85jwdwvl.jpg" alt="Thumbnail 250" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And we know that some customers are already using local permission models for specific clusters. For them, this provides an incremental path: their existing permissions for their local users continue to work, but you can have the new global identity-based model working as well. So the high-level architecture looks like this, where you have multiple warehouses.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=270" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9sku0t107lz2bmuttopm.jpg" alt="Thumbnail 270" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These warehouses will be registered in the common catalog, which in AWS's case is the AWS Glue Data Catalog. And the new thing is that you can apply Redshift Federated Permissions on that Glue Data Catalog, and those permissions will be enforced in each of the warehouses as you run queries. You can apply different permission models, whether table-level coarse-grained, masked data, column-level, row-level, or cell-level. Any of these permission models will be applied, and the data is managed through Redshift Managed Storage accordingly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=310" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faew06qlkoehkwlsvj9hk.jpg" alt="Thumbnail 310" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Core Concepts: Producer Warehouses, Global Identity, and Two-Phase Authorization
&lt;/h3&gt;

&lt;p&gt;Based on these three concepts, let me explain a couple of key ideas here. If you have data that you want to share, all you have to do is register with the AWS Glue Data Catalog. This is called a producer warehouse. If you want to access the data, you're a consuming warehouse. In that case, you don't have to do anything: the warehouses will be auto-mounted, and you can start querying the data from those warehouses based on the permissions that you have.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=340" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Futd3ua8h1vybmnr50aqc.jpg" alt="Thumbnail 340" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The second concept here is global identity, because we are talking about multiple warehouses. Local identity, which is per-cluster, is something we want to migrate customers away from, toward global identity. The two global identity models that customers use are, first, IAM roles, and second, they may have Okta or some other identity provider, in which case they can bring those IdP roles into IAM Identity Center and then apply the permissions on the IDC roles. Whether you use an IAM role or an IDC user, in both cases it's trusted identity propagation. So the identity, as it moves, follows the trusted services path, and it's always logged and available for auditing purposes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=390" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6o6zjwv1kwea42ecrk8.jpg" alt="Thumbnail 390" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And the last concept is authorization. It's a two-phase authorization. You are making calls from warehouse one to warehouse N. Warehouse N, where your data is, is where your policy stays, and the policies are validated at warehouse N. Then warehouse one, where you're actually running the query, is where the permissions are enforced. So if you have data masking, the columns will be masked at warehouse one, and all the coarse-grained and fine-grained access control permissions will apply, for example row-level security, column-level access, or dynamic masking.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=430" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhqiguts7z4evz0zyk4or.jpg" alt="Thumbnail 430" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So for admins, what do they have to do to register a warehouse? All they have to do is use a one-click option to register their warehouse in the AWS Glue Data Catalog. Just by registering, this creates the federated catalog on the Glue catalog side. For example, here it shows there are multiple warehouses, and as they register, the corresponding federated catalog gets created. This federated catalog is auto-mounted on all the warehouses in your account. So you have the auto-mounting happening there.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=460" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhbsx3p311md08192f9z9.jpg" alt="Thumbnail 460" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, when the analyst or anyone else makes a query, the authentication and token information is propagated through the trusted identity path. So in this case, if you're making a query from warehouse one to warehouse N, it passes that token and authorization information, via trusted identity propagation through the Glue catalog, to warehouse N, where the verification happens. The policy is collected and sent back to warehouse one, where it's actually enforced.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=500" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5c56xazj46moxm12suad.jpg" alt="Thumbnail 500" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Live Demonstration: Implementing Data Masking and Row-Level Security Across Warehouses
&lt;/h3&gt;

&lt;p&gt;All right, so let's look at a demo. This is a new launch, so the demo will help in understanding it. For this demo, think of two warehouses: there's a marketing warehouse and there's a sales warehouse. The sales warehouse is new, and they want access to the marketing data. So all the marketing warehouse has to do is register with the Glue catalog. It's a one-time thing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=520" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvc17f54p5py7l5ulutw5.jpg" alt="Thumbnail 520" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=530" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F41ld16mqxleylg37gjwd.jpg" alt="Thumbnail 530" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=540" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6a04ebh2izcsdy7twemg.jpg" alt="Thumbnail 540" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And the way it works is a simple option. You have the warehouse already created. You go into the actions, and there is an option there to register with the Glue catalog. You select that option, and if you just use the defaults, you can go and register it. There's also a checkbox there that lets you set up IDC as the second identity model, if you need that.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=560" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwgnc1l5gp1eb8xn7t7bh.jpg" alt="Thumbnail 560" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So once it's registered, they can go and verify it. For example, here you can see that the warehouse is registered with the Glue catalog and the permissions model shows as the registered permissions model. So now it is already part of the catalog, and they can go and start adding tables. In this case, the marketing team wants to create a new table, let's say a credit card table, and they want to apply a masking policy, because they don't want to expose the credit card number to the consumers, or whoever is accessing the data. So how do they go about doing that? Let me walk through it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=600" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyyia2wel9z39zy9t9jij.jpg" alt="Thumbnail 600" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So first, as part of the explorer, we are showing Query Editor V2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=610" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftz6kajs0bqwby3g4xhl4.jpg" alt="Thumbnail 610" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=620" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr5bn1nh0gfsgrunoo3j4.jpg" alt="Thumbnail 620" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The admin is creating the table. In this case, they're creating a simple credit card table. Then they're adding some data, which includes credit card numbers, so now the table is populated. Next they create a masking policy under which the credit card number is masked, and attach that masking policy to the credit card table for a particular role, let's say the analyst role. If the admin runs a SQL query, they can still see the credit card numbers, so the admin has full access.&lt;/p&gt;
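&lt;p&gt;As a rough sketch, those steps correspond to Redshift SQL along these lines. The table, column, role, and policy names here are illustrative assumptions, not the exact ones from the demo:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Illustrative recreation of the demo steps (all names assumed)
CREATE ROLE analyst;

CREATE TABLE credit_cards (
    customer_id INT,
    credit_card VARCHAR(256),
    state       VARCHAR(2)
);

INSERT INTO credit_cards VALUES
    (100, '4532015112830366', 'CA'),
    (101, '5425233430109903', 'NY');

-- Mask all but the last four digits of the card number
CREATE MASKING POLICY mask_credit_card
WITH (credit_card VARCHAR(256))
USING ('XXXX-XXXX-XXXX-' || SUBSTRING(credit_card, 13, 4));

-- Enforce the mask on this column for the analyst role
ATTACH MASKING POLICY mask_credit_card
ON credit_cards (credit_card)
TO ROLE analyst;
&lt;/code&gt;&lt;/pre&gt;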

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=650" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbxa2ks1rums7g486ps5d.jpg" alt="Thumbnail 650" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=670" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuu6i6m9i4da74pjobjbg.jpg" alt="Thumbnail 670" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that they have created that masking policy, they decide to grant access, and granting access is as simple as granting select access to the analyst role. That completes the administrative part for the marketing admin: they have registered the warehouse with the AWS Glue Data Catalog, created the table, added the data to it, created the policies, and shared the access.&lt;/p&gt;
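&lt;p&gt;The grant itself is ordinary Redshift RBAC; continuing with the same assumed names:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Share read access with the analyst role (names assumed)
GRANT SELECT ON TABLE credit_cards TO ROLE analyst;
&lt;/code&gt;&lt;/pre&gt;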

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=700" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2p5jauqd65yii8nl2p18.jpg" alt="Thumbnail 700" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=710" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqu60wd9zlyj5vmdzwbm3.jpg" alt="Thumbnail 710" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=720" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ulx5mr6nqe0d7felpcn.jpg" alt="Thumbnail 720" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now what do I have to do on the sales warehouse side to get access to the data? The answer is: sales admins don't have to do anything. The sales analyst just goes to their favorite editor, which here, for example, is Query Editor V2, and starts querying. When they query the data, the query path, as I explained before, goes from warehouse one to warehouse N, and they see the masked data there. So here is the read-only analyst role, and they are able to see the different auto-mounted databases. And in this particular auto-mounted database, they found the table that marketing just created, and they're just going to query it.&lt;/p&gt;
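&lt;p&gt;From the consumer side, this is just a cross-database query against the auto-mounted database. The database name below is an assumption for illustration; the actual auto-mounted name depends on how the producer was registered:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Run from the sales warehouse; marketing_db stands in for the auto-mounted name
SELECT customer_id, credit_card, state
FROM marketing_db.public.credit_cards;
-- credit_card comes back masked, e.g. XXXX-XXXX-XXXX-0366
&lt;/code&gt;&lt;/pre&gt;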

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=740" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5oabvions8bkhkt2bywr.jpg" alt="Thumbnail 740" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=750" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvuw404ngay35de7nowj.jpg" alt="Thumbnail 750" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So this query that is being run on the sales warehouse is making calls to this table in the marketing warehouse. The policy stays with the marketing warehouse and gets enforced in the sales warehouse. That is the path there. And as it's enforced, the masking policy applies; no masking setup is needed on the sales side. Then there are some governance changes, a new compliance rule, and the marketing admin has to apply additional policies: if the analyst is from the state of California, they should only get access to rows for consumers in the state of California.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=790" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qwey92063zci84o1n29.jpg" alt="Thumbnail 790" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Previously, you would have to apply those policies on each warehouse separately. With this, they have to do it only once. In this case, to apply that policy, they have to create the row-level policy, attach the row-level policy, and enable the row-level policy. Those are the three steps, so let's look at them. The admin logs in and creates the row-level security policy where the state is California. They attach the row-level security policy, providing the table name and the role, and then they turn on row-level security. The row-level security policy is now enabled on that particular table.&lt;/p&gt;
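&lt;p&gt;Those three steps map onto Redshift's row-level security SQL roughly as follows, again with assumed names:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- 1. Create the policy: only rows for California consumers (names assumed)
CREATE RLS POLICY california_only
WITH (state VARCHAR(2))
USING (state = 'CA');

-- 2. Attach it to the table for the analyst role
ATTACH RLS POLICY california_only
ON credit_cards
TO ROLE analyst;

-- 3. Turn row-level security on for the table
ALTER TABLE credit_cards ROW LEVEL SECURITY ON;
&lt;/code&gt;&lt;/pre&gt;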

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=820" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72ln7yyg0ssv66gckpnq.jpg" alt="Thumbnail 820" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=830" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8qgaerzv4ibhe7nucdtk.jpg" alt="Thumbnail 830" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=840" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frhhsglwptoohgv0kped1.jpg" alt="Thumbnail 840" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=850" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faeocgkw6twx8qwu4cery.jpg" alt="Thumbnail 850" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=860" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8v3mo9uz22oftzgn6lbt.jpg" alt="Thumbnail 860" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So now that the policy is set up, the sales analyst just goes and starts querying. This particular table now has two policies: a masking policy and a row-level policy. The output they get is the combination of the two. They will continue to see masked results, as you can see, but only the rows for the state of California. So you have multiple policies applied on the producer side, the marketing side; you don't need explicit data shares, and the policies are enforced on each of the warehouses.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=870" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2f1s2nihcikgv8jadkys.jpg" alt="Thumbnail 870" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So let's recap this demo. What we just saw was a simplified registration setup. The marketing admin was able to just register with the AWS Glue Data Catalog through the registration option. If they create a new cluster, a similar option applies as part of cluster creation; it's the default option that gets a warehouse registered with the AWS Glue Data Catalog. The granular access controls for compliance, the policies that were needed, data masking and row-level security for the credit card information, were applied by the marketing warehouse.&lt;/p&gt;

&lt;p&gt;These policies don't need to be configured on any of the consuming warehouses. Once they are applied on the producer warehouse, they just apply automatically. There was an improved user experience when the sales analyst went and accessed the data: he saw the auto-mounted warehouses, the tables were already accessible, and he could just query the data. When he queried the data, those policies were enforced, so he only saw output that complied with the policies.&lt;/p&gt;

&lt;p&gt;Lastly, there is horizontal scalability. You can spin up these clusters and warehouses in minutes, and you don't have to create additional permissions, policies, shares, users, groups, and all those things. That complexity goes away, because now you have the ability to access the data without additional work on the sales side.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=980" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxsthaeomgkqm57499ud7.jpg" alt="Thumbnail 980" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All right, so this is a new launch. It's currently being rolled out as part of Redshift patch version 1.97. So when you create a cluster, if you see Redshift patch 1.97, you will have access to it and can start using this capability. We have a documentation page that provides end-to-end information about how to apply this whole set of permissions and how to apply the IAM configurations. What I didn't show in the demonstration is the IDC path, which is also very easy to set up.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=H9SXN0d1z0U&amp;amp;t=1030" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhph92nd992snpysveeuu.jpg" alt="Thumbnail 1030" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We provide a wizard for you to walk through it, and the documentation gives you an easy way to set up the producer and consumer clusters and get going on building your warehouses. So thank you for coming to this talk. This is a new feature; we encourage you to try it out, see how it works, and provide feedback. There is a survey in the mobile app, so please provide feedback on the session there. Thank you.&lt;/p&gt;




&lt;p&gt;; This article is entirely auto-generated using Amazon Bedrock.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS re:Invent 2025 - How to prep Telemetry data for AI consumption (DVT222)</title>
      <dc:creator>Kazuya</dc:creator>
      <pubDate>Thu, 11 Dec 2025 04:20:04 +0000</pubDate>
      <link>https://dev.to/kazuya_dev/aws-reinvent-2025-how-to-prep-telemetry-data-for-ai-consumption-dvt222-3574</link>
      <guid>https://dev.to/kazuya_dev/aws-reinvent-2025-how-to-prep-telemetry-data-for-ai-consumption-dvt222-3574</guid>
      <description>&lt;p&gt;&lt;strong&gt;🦄 Making great presentations more accessible.&lt;/strong&gt;&lt;br&gt;
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.&lt;/p&gt;

&lt;h1&gt;
  
  
  Overview
&lt;/h1&gt;

&lt;p&gt;📖 &lt;strong&gt;AWS re:Invent 2025 - How to prep Telemetry data for AI consumption (DVT222)&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In this video, Grepr's real-time machine learning solution is presented, demonstrating how to achieve 100% observability at 10% of the cost. The speaker explains how Grepr automatically identifies patterns in logs and traces, passing through high-signal data while reducing noise by over 90%. For traces, Grepr analyzes full trace structures rather than just endpoints, enabling better performance tracking. All raw data is stored in an observability data lake for long-term access and can be backfilled when needed. The solution takes 30 minutes to deploy and works automatically without impacting developer workflows.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/WWLRM61820A"&gt;
&lt;/iframe&gt;
&lt;br&gt;
; This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.&lt;/p&gt;

&lt;h1&gt;
  
  
  Main Part
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=WWLRM61820A&amp;amp;t=0" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fevcy1zq37dlgsbuqrl1s.jpg" alt="Thumbnail 0" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=WWLRM61820A&amp;amp;t=30" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0lmwrn3fndftqojpm1uf.jpg" alt="Thumbnail 30" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Grepr's Real-Time Machine Learning: Achieving Full Observability at 10% of the Cost
&lt;/h3&gt;

&lt;p&gt;Hey everybody. Thanks for sticking around. I think I might be one of the last few sessions happening, so thank you for stopping by. Today I'm going to talk a little bit about how Grepr's real-time machine learning can help you get 100% of the observability that you're seeing today at 10% of the cost. I'll start by talking a little about the AI-for-telemetry problem, because that's something people always face; then I'll talk about extracting signal from that data so you can actually feed it into AI, and a bit about how Grepr works to get there.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=WWLRM61820A&amp;amp;t=40" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo535a8dft6acax83pku0.jpg" alt="Thumbnail 40" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=WWLRM61820A&amp;amp;t=70" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ecdm7wmyxfiambn1eul.jpg" alt="Thumbnail 70" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=WWLRM61820A&amp;amp;t=90" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjf4z53d6awvuhoarr7b4.jpg" alt="Thumbnail 90" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So if you think about how people have been working with observability for the past 15 to 20 years, it all started with full-stack observability: you would collect the data from the agents and send it into an application, and that aggregator is a full-stack aggregator, kind of a walled garden that defines what you can actually do with that data, and nothing more. Then over time we started seeing more openness in these observability platforms, more modularization, where OpenTelemetry came in as a protocol to separate data collection from the data aggregators, and then we started seeing telemetry pipelines as well. What happens after that is we want to enable AI-powered ops and workflows, and this is what everybody's talking about today. We want to empower SREs and DevOps to handle enormously more complex operations and systems.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=WWLRM61820A&amp;amp;t=100" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0bor3jrzw6wm9p13ntu.jpg" alt="Thumbnail 100" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But the biggest problem with AI systems today is that the data is just enormous, and it's mostly noise: of everything that you're collecting, you'll maybe ever need 1% of it. If you were to feed all that data into an AI model and tell it, hey, figure out what's actually in there, it's really garbage in, garbage out. So the biggest problem with using AI for observability is the ability to denoise the data and figure out how to concentrate it, so you can actually have clean data for these systems. And really, that's what Grepr does.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=WWLRM61820A&amp;amp;t=140" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqknvtmnvht1p3wz13wap.jpg" alt="Thumbnail 140" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=WWLRM61820A&amp;amp;t=160" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyry5yse2ka46eu3pu7pp.jpg" alt="Thumbnail 160" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So the way Grepr works is you start, let's say, with your existing deployments. You're collecting data: logs, traces, metrics. That data is going into your observability vendor, maybe Splunk, Datadog, Grafana, whatever it is, and Grepr sits in the middle. We automatically figure out what is noisy and what's not, what all the patterns in your data are, and we use that to figure out how much volume is actually passing through each of those patterns. We can use those patterns to give you full coverage of your application, passing through data for all of them and making sure that we don't miss anything that might be useful. And this is all automatic; it works out of the box. We automatically look at the data and figure out the patterns, and we can operate on millions of patterns in the data. Today we're doing this for logs and traces, and we're building metrics next year.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=WWLRM61820A&amp;amp;t=210" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fafjbs3dq9uj49r46cnka.jpg" alt="Thumbnail 210" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=WWLRM61820A&amp;amp;t=230" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq1jzmwm2yl62ix2z4srx.jpg" alt="Thumbnail 230" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=WWLRM61820A&amp;amp;t=240" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftvl510ewapswyp9gy0k3.jpg" alt="Thumbnail 240" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=WWLRM61820A&amp;amp;t=250" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvg46298nssu3dndcd7sc.jpg" alt="Thumbnail 250" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So the way Grepr works for logs is that as the data is passing through in real time, we figure out the patterns in those logs. Here in this example, we see that there are two patterns in the data: these GETs and these POSTs. What we do is pass through the initial few samples for each of those patterns, and once we have enough samples, we say, hey, we've seen enough of those log messages; let's actually start reducing them. Then, at the end of a two-minute window, we send you a summary saying, hey, we've seen this much of this pattern and that much of that pattern, and we can also aggregate data inside of those patterns, so we can say things like: we've seen an average latency of this much, or this many bytes actually passed through.&lt;/p&gt;
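&lt;p&gt;To make the rollup concrete, the two-minute summaries described above amount to an aggregation like the following. This is only an illustrative SQL sketch over a hypothetical raw_logs table with assumed pattern_id, event_time, latency_ms, and bytes_sent columns; Grepr computes this inside its streaming engine, not via SQL:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Per-pattern, two-minute-window summary (table and columns assumed)
SELECT
    pattern_id,
    FLOOR(EXTRACT(EPOCH FROM event_time) / 120) AS two_minute_window,
    COUNT(*)        AS events_seen,     -- volume seen for this pattern
    AVG(latency_ms) AS avg_latency_ms,  -- average latency inside the pattern
    SUM(bytes_sent) AS total_bytes      -- bytes that actually passed through
FROM raw_logs
GROUP BY 1, 2;
&lt;/code&gt;&lt;/pre&gt;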

&lt;p&gt;So this allows you to get exactly the data that you need, high-signal data passed through downstream, whether to AI models or to your observability vendor. And all of this is super configurable: you can make the window one minute instead of two, or decide, hey, I don't want to aggregate this pattern, I want to pass it through. We also do things like automatically figure out which logs are powering your dashboards and alerts, and we can automatically add them into Grepr, so that rolling it out doesn't change your workflows or impact them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=WWLRM61820A&amp;amp;t=290" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3yzsedlisgd9cmwuadyr.jpg" alt="Thumbnail 290" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For traces, we do something similar. So if you're familiar with tail sampling, the way that it works is you look at the endpoints of the traces, like where they're hitting, where they're getting started, and then you start looking at the performance of each of those endpoints.&lt;/p&gt;

&lt;p&gt;But there's a problem here. What if that endpoint is sometimes cached and sometimes not? That means your traces actually take different paths depending on whether the data is cached. In this example, we're seeing all of the traces starting at a circle, and we have two red ones: one has only two hops, and the other has three or four hops. You want to look at the performance of these two paths differently, even though they start at the same endpoint.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=WWLRM61820A&amp;amp;t=350" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fncze02120lydqt1v0p2n.jpg" alt="Thumbnail 350" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=WWLRM61820A&amp;amp;t=360" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fagnpvn01cu1zjqn786o3.jpg" alt="Thumbnail 360" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What we do is look at the full structure of every trace to map out your entire application and understand everything we need to pass through and make the end user aware of. Then we keep track of the performance of each of those different signatures, these full structures, which allows us to understand when this particular path is slow and when that particular path is fast. We can then drop the noisy data, the stuff that's actually unnecessary, and give you full sampling coverage across your entire application. But ultimately no data is ever lost.&lt;/p&gt;
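
&lt;p&gt;One plausible way to track per-path performance like this (my reading of the talk, not Grepr's code) is to hash each trace's parent/child structure into a signature and keep latency stats per signature, so a cached two-hop path and an uncached longer path are measured separately even when they share an entry point:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import hashlib
from collections import defaultdict
from statistics import mean

def trace_signature(spans):
    # spans: list of dicts with "name" and "parent" (None for the root)
    edges = sorted((s["parent"] or "root", s["name"]) for s in spans)
    return hashlib.sha1(repr(edges).encode()).hexdigest()[:12]

latency_by_signature = defaultdict(list)

def record(spans, total_ms):
    latency_by_signature[trace_signature(spans)].append(total_ms)

# Same entry point, two different paths through the application:
cached = [{"name": "GET /item", "parent": None},
          {"name": "cache", "parent": "GET /item"}]
uncached = [{"name": "GET /item", "parent": None},
            {"name": "db", "parent": "GET /item"},
            {"name": "render", "parent": "db"}]
record(cached, 4.0)
record(uncached, 38.0)

for sig, values in latency_by_signature.items():
    print(sig, f"avg {mean(values):.1f} ms over {len(values)} trace(s)")
&lt;/code&gt;&lt;/pre&gt;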

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=WWLRM61820A&amp;amp;t=390" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9cff43vxoo2slc7em8jn.jpg" alt="Thumbnail 390" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What we do is put all the raw data into an observability data lake, which allows us to keep that data at low cost for a very long time. You can keep it as long as you want, and it can always be queried. If you ever want to find a log message from six months ago, you can go and look at it. You don't have to do a rehydration, but if you want to, you can backfill that data into your observability vendor, triggered either manually or automatically.&lt;/p&gt;

&lt;p&gt;You can hook up this backfill to a ticketing system, for example. If a customer opens a particular ticket, maybe you want to go and load all the logs that are relevant for that customer or all the traces that are relevant for that customer. Or maybe you have some anomaly or fraud detection system that you want to hook up so that when an analyst finds that there's an anomaly, all the logs are already there in your end system for them to go and troubleshoot.&lt;/p&gt;
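
&lt;p&gt;As a sketch of what that wiring could look like, here is a hypothetical webhook handler: the backfill endpoint, request fields, and token are invented placeholders, not a documented Grepr API.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical ticket-to-backfill wiring (all endpoint details invented).
import os
import requests
from flask import Flask, request

app = Flask(__name__)
BACKFILL_URL = "https://grepr.example.internal/v1/backfill"  # placeholder

@app.post("/ticket-created")
def ticket_created():
    ticket = request.get_json()
    # Pull everything relevant to this customer for the incident window so
    # the logs are already downstream when an analyst starts troubleshooting.
    requests.post(
        BACKFILL_URL,
        headers={"Authorization": "Bearer " + os.environ["BACKFILL_TOKEN"]},
        json={
            "filter": {"customer_id": ticket["customer_id"]},
            "lookback_hours": 24,
        },
        timeout=30,
    )
    return {"status": "backfill requested"}

if __name__ == "__main__":
    app.run(port=8080)
&lt;/code&gt;&lt;/pre&gt;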

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=WWLRM61820A&amp;amp;t=460" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuakkx2yb0pberv8hlwgt.jpg" alt="Thumbnail 460" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The results we've seen are very encouraging from our customers. We see over 90% reduction in many cases with very minimal impact to developer workflows. Grepr usually takes about 30 minutes to get started with. You just set it up, point your existing agents into Grepr, and Grepr automatically starts working to figure out what all the patterns are, do the compression, and send the data through.&lt;/p&gt;

&lt;p&gt;This changes that conversation with your developers or your SREs who are trying to figure out how to even get started. If you've got 100 teams and they're all using logs or traces, and you need to really cut down this spend but you're not really sure where to start, you might wonder whether to start looking at patterns one by one and adding drop filters and sampling rates for these patterns. What we do is we just set it up automatically for you.&lt;/p&gt;

&lt;p&gt;That changes the conversation from being a blank slate where you have to do something, learn this thing, and get certified in it, to a place where it's actually more about tuning. You set it up, it starts working, and you look at the data that's passing through. You can make decisions about whether this is good, whether this is enough, or whether you need more. You can do that as time moves on because ultimately there's no risk. The data's all in the data lake, so if you ever need something that isn't actually forwarded, you can always go back into the data lake to fetch it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=WWLRM61820A&amp;amp;t=570" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiy9yqpfxrbbb0syt354n.jpg" alt="Thumbnail 570" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You'll always find everything there, and this makes it really easy to roll out Grepr with the assurance that your data is going to be there, without impacting workflows or increasing MTTR. Thank you. This is a very quick talk because Grepr is actually very fast and easy to describe, but I'm happy to take any questions since we've got about 10 minutes left.&lt;/p&gt;




&lt;p&gt;; This article is entirely auto-generated using Amazon Bedrock.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS re:Invent 2025 - Navigating the future: Solutions architecture in the age of AI (ARC203)</title>
      <dc:creator>Kazuya</dc:creator>
      <pubDate>Thu, 11 Dec 2025 04:10:48 +0000</pubDate>
      <link>https://dev.to/kazuya_dev/aws-reinvent-2025-navigating-the-future-solutions-architecture-in-the-age-of-ai-arc203-4ipp</link>
      <guid>https://dev.to/kazuya_dev/aws-reinvent-2025-navigating-the-future-solutions-architecture-in-the-age-of-ai-arc203-4ipp</guid>
      <description>&lt;p&gt;&lt;strong&gt;🦄 Making great presentations more accessible.&lt;/strong&gt;&lt;br&gt;
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.&lt;/p&gt;

&lt;h1&gt;
  
  
  Overview
&lt;/h1&gt;

&lt;p&gt;📖 &lt;strong&gt;AWS re:Invent 2025 - Navigating the future: Solutions architecture in the age of AI (ARC203)&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In this video, John Walker and Dina Hussein explore how Solutions Architects must evolve in the AI era, drawing parallels to Garry Kasparov's "Centaur Chess" where humans partnered with AI consistently outperformed both supercomputers and grandmasters alone. They discuss fundamental shifts: architecture advice now originates from LLMs as an "opening gambit," velocity has increased dramatically (turning months-long projects into hours), and commoditization enables focus on strategic rather than routine decisions. The speakers introduce four SA personas—Inventor, Entrepreneur, Composer, and Advocate—each with unique strengths for different situations. They emphasize architects must transition from reactive to predictive, from planning to simulation, from periodic reviews to constant sentinel monitoring, and become "tech debt liquidators." Practical strategies include querying architectures with AI, generating POC simulations, and automating compliance checks using tools like Amazon Q and Kiro.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/DAh1JHOe56w"&gt;
&lt;/iframe&gt;
&lt;br&gt;
; This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.&lt;/p&gt;

&lt;h1&gt;
  
  
  Main Part
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=DAh1JHOe56w&amp;amp;t=0" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffj1dvfq85qdlgc8rk464.jpg" alt="Thumbnail 0" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Introduction: The Evolving Role of Solutions Architects
&lt;/h3&gt;

&lt;p&gt;Hello everyone and welcome. So who here is at their first re:Invent? All right, first session of re:Invent? That's everybody's hands. All right, we're all awake. That's good. Hey, I'm John Walker. I'm a Principal Solutions Architect here at AWS. I've got 25 years of industry experience and I've got five years here at AWS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=DAh1JHOe56w&amp;amp;t=40" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv7zckw67era2hhc561lq.jpg" alt="Thumbnail 40" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And my name is Dina Hussein. I'm a Solutions Architect Manager with AWS, working with Greenfield customers. My team and I work with some of our most innovative customers, those who are building new businesses on AWS or those who are rethinking entire industries from the ground up. Today, John and I will be talking about where the Solutions Architect role is heading and how to succeed in this era.&lt;/p&gt;

&lt;p&gt;Let me walk you through what we're going to be covering today. First, we're starting with what's changing, and then we will move on to a new game with a new set of rules. Then we'll talk about how to evolve our craft as Solutions Architects, and finally we'll move to strategies to get you started no matter where you are.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=DAh1JHOe56w&amp;amp;t=70" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0dgft9o179o6m6b01be.jpg" alt="Thumbnail 70" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Kasparov Lesson: Humans and AI as Centaur Teams
&lt;/h3&gt;

&lt;p&gt;So let me take you back before we get started, a little story about something that happened more than 25 years ago. This is Garry Kasparov sitting across from IBM's Deep Blue, a supercomputer that could evaluate 200 million positions per second. 1997, New York City, a chess Grandmaster, the first time a human expert in the field was defeated by an AI. Who remembers this? I was a kid at the time. I remember this one. And that was the scuttlebutt at the time: oh, AI is going to take our jobs. It's going to replace us all. It's smarter than the smartest among us. And this really didn't happen.&lt;/p&gt;

&lt;p&gt;So game six, Kasparov resigns after just 19 moves, and he said he felt like the ground was shifting beneath his feet. But he didn't stop there. There's more to the story. After this, Garry developed a tournament style called Centaur Chess. A centaur is a mythical beast, half man, half horse, right? And here he designed a chess tournament that would take supercomputers and these chess Grandmasters, and it would pit them against these centaur teams.&lt;/p&gt;

&lt;p&gt;And a centaur team was a laptop with an AI chess engine on it, not the supercomputer, but a smart-enough AI for circa 1997 to 2005, and humans who would work with this chess AI. And what happened? It's no surprise. The human and the chess AI together, the centaur teams, consistently won, beating a supercomputer that could calculate that many positions per second and beating the Grandmasters. This is our hypothesis today. I don't want to leave you in suspense for 60 minutes. Can we play this new game with AI? Yes, we can.&lt;/p&gt;

&lt;p&gt;And I want to take this a little bit further because the game has really been changing. What we've seen in the last 18 months is that it's not that AI will be useful one day; it's useful right now. And some of the things that are changing: the primitives. We used to think in terms of servers and databases, containers and serverless, and now we're thinking about brand new things. Agents, did that exist? No. MCPs, did that exist? No. A2A, agent-to-agent communication, did that exist? No. But we have to go build with that. We have to design with it, right?&lt;/p&gt;

&lt;p&gt;So the tools have changed. Who's building with brand new tools right now? Not new takes on old tools, but brand new kinds of tools. And then the patterns are changing. How do you put something that is non-deterministic into your program and have it work today? Brand new kinds of patterns you've got to solve for. But then all of that together, there's new problems, problems that you can take off the shelf that have been on the shelf forever in your industry, with your company, with your internal users and customers, things that you were unable to do before this age. So that's what we're exploring today.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=DAh1JHOe56w&amp;amp;t=270" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7mxwzkldgt7b3t7pdua1.jpg" alt="Thumbnail 270" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What Value Do Solutions Architects Provide?
&lt;/h3&gt;

&lt;p&gt;But first, let's lay some ground rules. Dina, what is it that architects actually do for customers and businesses? Thank you so much, John. So the question is, what value do Solutions Architects provide? This is a simple question, right? But the answer isn't simple at all. To answer it, you need to make many decisions: what data to store, how to process it, how to organize it, how to secure it, and so on, and some of those decisions branch out into even more, each with trade-offs.&lt;/p&gt;

&lt;p&gt;So cost versus performance, speed versus reliability, flexibility versus stability, and so on. So you go on from one decision to hundreds of decisions. And here's what psychology tells us about many choices, many decisions.&lt;/p&gt;

&lt;p&gt;So when humans face many choices, we freeze, or we make poor decisions, or we tend to pick the first available alternative and we miss better choices. Imagine standing at the supermarket at the cereal aisle with hundreds of brands. You just wanted breakfast and you are overwhelmed with so many boxes. So what do you do? You either pick the first familiar box, or you stand there paralyzed trying to analyze and optimize for a decision that shouldn't take that much time at all.&lt;/p&gt;

&lt;p&gt;So imagine that decision isn't about cereal. It's a multimillion dollar architecture which will be the foundation of your business. And here is why organizations need Solutions Architects, because we do three things. First, we explore the universe of choices. Second, we curate the most viable choices. And lastly, we help make a decision by challenging the status quo by asking questions like why, why not, what if, and what about.&lt;/p&gt;

&lt;p&gt;Now, some of you might be asking me, hey Dina, but can't AI do this right now? And in fact, yes, AI can actually give lots of architectural alternatives, and it's getting really good at this. But here's what AI can't do. AI doesn't understand your organizational politics and your organizational context. It doesn't know that your team tried microservices years ago and it was catastrophic, not because of tech, but because of culture. It doesn't know which stakeholder relationships are fragile. It doesn't know that your CEO is risk averse while your CTO is a risk taker.&lt;/p&gt;

&lt;p&gt;So AI gives you possibilities while Solutions Architects give you choices, and there's a world of difference between those two. Now let's move on to where the architectural advice comes from, John.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=DAh1JHOe56w&amp;amp;t=420" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftygfkown01vouzvnlub4.jpg" alt="Thumbnail 420" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Opening Gambit: How AI Changes Where Architecture Advice Comes From
&lt;/h3&gt;

&lt;p&gt;Thank you, Dina. And I think it's about those questions, right?  Because you can put a question into an AI and it's really good at providing answers, maybe too confident sometimes, right? Hallucinations. But you have to keep asking questions. You're asking questions with a reason. It's because of your team and your organization, so let me tell you about a move.&lt;/p&gt;

&lt;p&gt;So there is a thing in chess called a gambit. It's that opening move where you take that first chess piece out and you put it in front, and you know that chess piece is going to get sacrificed. That's the opening gambit, right? We are experiencing the same thing in architecture. So it used to be that you'd sit down with a blank page and you would design things out. It wasn't really a blank page. You'd go to Stack Overflow, you'd Google it, you're not without help. But now what's happening? I have customers coming to me and starting with that opening move. They're giving us the first opening gambit. It's no longer us. The first move belongs to the LLM. It belongs to that agentic system that is pulling out that first pass at a design, and it belongs to the customer.&lt;/p&gt;

&lt;p&gt;So whenever we have this shift happening, it's a little bit challenging, but this is changing in our favor. And I'll tell you why. Here's what we've gained. We've gained the ability to explore maybe hundreds of options if you wanted to. So what we can do today is take this architecture goal, the thing we want to do with the architecture, and put it into an LLM and get not just one option. Get dozens of choices, hundreds if you want to, and begin to explore those in depth. How should you do it?&lt;/p&gt;

&lt;p&gt;So I can say, give me ten different architectural approaches to a real-time event processing system. It can give me those ten, but then I start asking questions. I trade them off with one another. What is going to be most important for me, for my organization, for the goals that I have, and for what I understand about the rest of the systems that it has to interface with? So I'll take Kiro, simulate a POC, and really the POC is to answer those questions. What's going to happen with this component when it interacts with the other? What happens for security? What happens for scale? Whenever you create a proof of concept, it should be answering a question you don't know the answer to.&lt;/p&gt;
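
&lt;p&gt;A minimal sketch of that flow using the Amazon Bedrock Converse API; the model id is an assumption, and the prompts are just examples of asking for options and then trading them off against your own context:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import boto3

MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"  # assumed model id
bedrock = boto3.client("bedrock-runtime")

def ask(history, text):
    history.append({"role": "user", "content": [{"text": text}]})
    resp = bedrock.converse(modelId=MODEL_ID, messages=history)
    reply = resp["output"]["message"]
    history.append(reply)                 # keep the thread for follow-ups
    return reply["content"][0]["text"]

history = []
# The opening gambit: let the LLM make the first move.
print(ask(history, "Give me ten different architectural approaches to a "
                   "real-time event processing system on AWS."))
# Your move: turn options into choices with organizational context.
print(ask(history, "We are a 12-person team with strict latency SLOs and "
                   "no Kafka experience. Which three would you shortlist, "
                   "and what should a POC prove for each?"))
&lt;/code&gt;&lt;/pre&gt;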

&lt;p&gt;And so what's my role now? Am I just using AI to do the work? Am I sitting on the sidelines and answering questions that customers bring to me when they already have the first pass at the architecture? No, you're doing something very important. This is not something that's happening to you. You ask the LLM for options. You run the simulations, you generate the POCs, and you exercise the judgment, because it's your judgment at the end of the day that counts. And understanding the difference in those architectures that were offered as options and turning them into choices, right?&lt;/p&gt;

&lt;p&gt;So what we have at the first pass is that you have to become the warden of thought. You are making decisions and exercising judgment at scale. And this is the first strategy we have to choose in this game: losing the opening gambit and winning by knowing all the other moves we can decide among.&lt;/p&gt;

&lt;h3&gt;
  
  
  Velocity and Commoditization: From Months to Hours
&lt;/h3&gt;

&lt;p&gt;But next, we also have a velocity of change that is increasing. Because we can go grab all of those capabilities, the speed is increasing. So Dina, take us through the speed that's changing with architects.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=DAh1JHOe56w&amp;amp;t=620" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqbkau3oj4g7gciqdhf2n.jpg" alt="Thumbnail 620" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thank you so much, John. The first shift is about where architecture advice comes from. The second shift is equally important. Think about this example. If customers had asked us years ago to add intelligent search capabilities to an application, this used to be a few months' job, right? You would start by building the architecture design and reviewing architecture alternatives. You would build a custom machine learning model, and you would integrate everything. So, a few months. Now it's a Tuesday afternoon job, right? You spin up Amazon Bedrock with Knowledge Bases, you configure RAG, you point it at your documents, and your job is done.&lt;/p&gt;
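
&lt;p&gt;As a hedged sketch of that "Tuesday afternoon" version: once a Bedrock Knowledge Base exists and points at your documents, a RAG query is a single call. The knowledge base id and model ARN below are placeholders for your own:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import boto3

client = boto3.client("bedrock-agent-runtime")

response = client.retrieve_and_generate(
    input={"text": "What is our refund policy for enterprise customers?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB123EXAMPLE",  # placeholder id
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/"
                        "anthropic.claude-3-5-sonnet-20240620-v1:0",
        },
    },
)
print(response["output"]["text"])  # grounded answer with your docs as context
&lt;/code&gt;&lt;/pre&gt;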

&lt;p&gt;So from a few months to a few hours, this is not an incremental improvement. This is a fundamental change in velocity. And this is what's happening. When things move that fast, parts of the architecture are becoming commoditized. And by commoditized, I mean architectural decisions are easier now to make and to reverse with less cost and less time. So part of the architecture is becoming commoditized. And what does it mean to us? It means that the commodity decisions, those decisions with a clear right answer, those decisions get super fast and they get perhaps entirely automated.&lt;/p&gt;

&lt;p&gt;But the strategic decisions, the ones that are relying on your organizational politics, organizational context, those are still there. And they still need a Solutions Architect more than ever before. Because here is what happens. When velocity increases, organizations tend to do ten things instead of one or two. And those things are not the easy things. They are the hardest, the things that require more innovation. So commoditization is happening, velocity is increasing. But this doesn't diminish our role as Solutions Architects. This focuses it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=DAh1JHOe56w&amp;amp;t=760" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmswzczm6ovk9rw671s1m.jpg" alt="Thumbnail 760" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we need to focus less on commodity decisions. Think about it. Those decisions are getting more like primitives for us to use. We need to focus more on strategic decisions with no clear answer yet. And that's the first shift. Moving on, let's talk about the second.&lt;/p&gt;

&lt;h3&gt;
  
  
  Addressing the Impossible: New Building Blocks and Problem Classes
&lt;/h3&gt;

&lt;p&gt;All right, and so we are going to begin to address a new class of problems on top of this velocity. Because that velocity did increase, and you've got more difficult decisions landing on your desk because of the commoditization of the architecture. Dina and I really felt that same ground shifting beneath our feet when we were putting this talk together. So we discussed it and thought through what happens three months, six months, eighteen months down the line when you begin to apply these strategies as an architect. You end up with this as a logical conclusion.&lt;/p&gt;

&lt;p&gt;Because you are now addressing more strategic problems, what you begin to be able to do is address the impossible problems. Not things that are impossible right now, but things that were impossible eighteen or twenty-four months ago. Why were they impossible? Because the technology literally didn't exist. It's just been created this year and last year. So think about something that is non-deterministic in nature and dropping it into your architecture, something that can actually explain, understand, and take action on human language. That unstructured data you used to pull metadata out of and then make your best guesses about, now you can actually understand what's in the document and make real decisions.&lt;/p&gt;

&lt;p&gt;Let me give an example first. Take something like a marketing focus group program, where you're going to go out and target customers by sitting a group of folks down, asking them questions, and getting their feedback. This used to be a really tedious process, and it still kind of is sometimes. So you get a market focus group, and you're going to ask individuals what they want and what they need. You're going to have to get folks from different market sub-segments, go reach out and develop those groups, and go back and forth with your go-to-market strategy and your marketing campaigns.&lt;/p&gt;

&lt;p&gt;What if you can then create pseudo personas in an AI and be able to ask those pseudo personas how this marketing campaign is going to shake out, instead of taking that precious time of human beings and going after that for just one to ten different campaign ideas and getting the feedback of the folks in the focus group? What if you had your AI personas? What if you took those AI personas and used those to get the initial winnowing list? And then you go to the focus group with something that's much more likely to work for your campaigns. Maybe then you're able to get to more focus groups with more diverse sets of customers. Maybe you're able to go to market faster. Maybe you're able to understand your customer better.&lt;/p&gt;

&lt;p&gt;So understanding things that involve a lot of context-heavy logic, or being able to put in something that can work in persona, in the mode that a human being would normally operate in, that's brand new. This is something we weren't able to do before.&lt;/p&gt;

&lt;p&gt;So increase the scope of your research, your insight, your critiques, whether this is data that's coming back from support talks that you're having with your customers. Maybe it's chats you're having with your customers in a sales situation, but all of that information is now rich enough and available enough to actually take action on it.&lt;/p&gt;

&lt;p&gt;Second, I'd say you've got your new building blocks. We talked about this a little bit. You're going to have to take that intelligence and determine where it goes in your system. Let me explain how to identify exactly the kinds of pieces in your architectures that are going to be prone to this sort of change.&lt;/p&gt;

&lt;p&gt;An architecture is made of many components. What you want to recognize are the components that have a lot of logic baked in and that are a constant source of churn. New rules come in from regulations, new requirements come in from your customers, and you're changing something that is a high-complexity piece of code. It's called McCabe cyclomatic complexity, and you can actually use your IDE to study your code and find out where this is.&lt;/p&gt;

&lt;p&gt;But it's the thing with a whole bunch of if blocks. You know it, you've got this in your code. You may have created it in the past, and I secretly have shame for those pieces of code that I've created myself. But those things where you have a business rule engine, perhaps today, some of those pieces that are trying to understand but don't actually understand, that's where you want to go use some of those brand new building blocks.&lt;/p&gt;
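
&lt;p&gt;One way to surface those hot spots outside the IDE: the radon library computes McCabe cyclomatic complexity for Python code (the module name below is a stand-in for your own):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# pip install radon
from radon.complexity import cc_visit

with open("business_rules.py") as f:    # hypothetical module name
    source = f.read()

# Highest complexity first; long if/elif chains float to the top.
for block in sorted(cc_visit(source), key=lambda b: -b.complexity):
    print(f"{block.name}: complexity {block.complexity}, line {block.lineno}")
&lt;/code&gt;&lt;/pre&gt;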

&lt;p&gt;And a third kind to recognize, whenever you have a system that is trying to do something, but you're telling it how to do it, maybe you've got five different paths, ten different paths in order to achieve that logic, what you're doing is you're telling it how to do something, but there's a lot of different ways it could do it. Instead, think of using an agentic system where you can use an agent and give it a goal and make it autonomous.&lt;/p&gt;

&lt;p&gt;Don't tell it how to do it, tell it what to do, give it a goal and step back and see how it's going to perform. Then make another agent that's going to watch that agent and make sure that its quality stays high. So the second agent to watch its quality is extremely important. So context heavy logic, the new building blocks, and use those goal oriented systems. Think about those things when you're looking at your architecture for the kinds of components that should be changed with these AI based systems.&lt;/p&gt;
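
&lt;p&gt;Here is a bare-bones sketch of that goal-plus-watcher pattern. The orchestration is hand-rolled for clarity, the model id is assumed, and the goal text is just an example:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import boto3

MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"  # assumed model id
bedrock = boto3.client("bedrock-runtime")

def ask(prompt):
    resp = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return resp["output"]["message"]["content"][0]["text"]

GOAL = "Produce a failover runbook for the orders API standby region."

# Worker agent: gets the goal, decides the steps itself.
draft = ask("You are a worker agent. Achieve this goal, choosing your own "
            "steps: " + GOAL)

# Watcher agent: guards quality, can demand rework a few times.
for _ in range(3):
    verdict = ask("You are a watcher agent guarding quality. Reply APPROVE, "
                  "or REVISE followed by reasons.\nGoal: " + GOAL +
                  "\nDraft:\n" + draft)
    if verdict.strip().startswith("APPROVE"):
        break
    draft = ask("Revise the draft to address this feedback.\nFeedback:\n" +
                verdict + "\nDraft:\n" + draft)

print(draft)
&lt;/code&gt;&lt;/pre&gt;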

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=DAh1JHOe56w&amp;amp;t=1060" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnk7vck4wbg2xwk2hj4lq.jpg" alt="Thumbnail 1060" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Living in the First Derivative: Architects Thrive in Change
&lt;/h3&gt;

&lt;p&gt;All right, Dina, we've talked about a lot of things and it feels very overwhelming. I know I felt overwhelmed and I know that a lot of our peers have felt overwhelmed. So what do we do to process this?&lt;/p&gt;

&lt;p&gt;Let me make a pause here for a moment. We've thrown a lot at you. Commoditization is happening, velocity is increasing, architecture advice is coming from AI, and all of that can make you feel unsettled, like the role that you signed up for is changing into something you're not sure about.&lt;/p&gt;

&lt;p&gt;Here is something that a very wise architect taught us. Gregor Hohpe, if you know his work from Enterprise Integration Patterns and "The Architect Elevator," said something that changed how we perceive our role as Solutions Architects. He said that architects live in the first derivative.&lt;/p&gt;

&lt;p&gt;Now, if you remember calculus, and I'm sure some of you are trying not to remember calculus, the first derivative measures the rate of change, not the position. It's not where we are, it's how fast we're changing. And Gregor's point is that Solutions Architects don't live where things are stable and unchanged. We live where transformation is happening.&lt;/p&gt;

&lt;p&gt;For instance, organizations don't call Solutions Architects when systems are humming along in peace and everything is stable. They call us when there is a migration to the cloud, when there is an AI adoption project, when there is a novel proof of concept that we need to build, when we need to figure out value out of Gen AI, for instance, and so on and so forth.&lt;/p&gt;

&lt;p&gt;So architects live where there is change. We are needed there, and this is our natural habitat. This is where we have always lived. And by saying that where there is change, we're needed, we're not just trying to be nice. This is not a nice sentiment. This is naturally what the Solutions Architect role is about, and this is naturally who we are as Solutions Architects.&lt;/p&gt;

&lt;p&gt;John, let's speak about strategies, how to execute in this environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=DAh1JHOe56w&amp;amp;t=1190" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm4tehz59uf9j221i6d0o.jpg" alt="Thumbnail 1190" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Working Backwards: Five Customer Questions and AI-Enhanced Iteration
&lt;/h3&gt;

&lt;p&gt;So I love that talk from Gregor Hohpe. If you haven't read "The Architect Elevator," it's an absolute treat, and I think it's extremely pertinent to the time we're going through right now. That's actually how I assess what my customers need: by looking at their change and whether I'm needed at that time. Maybe they're steady state, things are in production, they're humming along, they're not making changes. But where there is change, you're needed.&lt;/p&gt;

&lt;p&gt;How do we actually do the change though? This is the question.&lt;/p&gt;

&lt;p&gt;Who's heard of working backwards from Amazon? Okay, a couple of us. So I want to talk through five customer questions we apply to every problem that we do. This is actually how Amazon has operated since we were a bookstore, since Jeff Bezos's days in the late nineties. Going through this, we're going to ask five questions. They're not five business questions, they're not five technical questions, they're five customer questions. Every project I've been a part of that hasn't worked right has fundamentally missed one of these five questions. I didn't know it at the time, but now Amazon makes it really clear. This has been used at AWS as well, the entire time the company's been around. Let me walk you through it.&lt;/p&gt;

&lt;p&gt;So first, question one: who is your customer? You do have a customer for every piece of software, technology, or system that you create. That customer is likely someone internal or external. But there's a person, it's a someone, it's not an organization. There's someone who's going to say yes or no. That's your customer, right? Know your customer, know things about them, know what they want, know what's going to actually help them out in their day, which comes to defining the problem. What's actually the problem they're facing? Go define the problem and set down a problem statement. You did this when you did science fair in school, right? You're going to set down that problem statement, and then you're going to have a hypothesis.&lt;/p&gt;

&lt;p&gt;So this is really just the scientific method when you boil it down. You go invent that solution. But it's never the first solution, and it may not be what they say they want. It may be what they need and they don't yet know that. That's a possibility. So go break that down a little bit and really see if your problem statement matches what your solution is. Then once you invent that solution, go refine the solution because no solution is perfect on the first pass. Go try it out and see what's going to happen with that solution and test it, iterate, learn what's going on with that solution, and actually define those things before you get started.&lt;/p&gt;

&lt;p&gt;Answer those five customer questions. How are you going to measure it? How are you going to test and iterate? What are the metrics that you're looking for? What kinds of customer stories are you going to pull out in order to refine this and go back through these five customer questions? But we don't stop here. This is as an architect what I do. Then I go break down the architecture, I decompose that into different components, and then I put those components back together. It's what I've always done.&lt;/p&gt;

&lt;p&gt;With AI, we're going to be able to understand more about our customers. Remember that unstructured data we're able to go ask questions about? Different kinds of data sources, we're going to be able to get this information to listen to them more. We'll be able to define the problem better by iterating on that problem statement against all that information we understand about the customers. Throw this against an agentic system. Maybe use Amazon Quick Suite to go after this. This is a new technology where you can actually go in with a chat system and tie this to your data and go back and forth on trying to refine that problem.&lt;/p&gt;

&lt;p&gt;Next, invent the solution. Here I'd go with an agentic IDE. So grab an agentic IDE. I use Kiro as my daily driver. It's Amazon Kiro. It's great, it's an agentic system where you can actually point it at your problem statement, all your definitions inside of a repository, and actually go invent, iterate on this piece. This is where you're going to develop those proofs of concepts and things like that. Fourth, refine that solution. As you learn from those proofs of concepts, iterate, refine it, go ahead and make those changes. Make them rapidly and get them out to production quickly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=DAh1JHOe56w&amp;amp;t=1430" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyo9t68ud5kbk9jq0abd2.jpg" alt="Thumbnail 1430" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And with all that data that's coming back, test and iterate on it and don't just sit on it. Make sure that whenever that data comes back in, you're paying attention to everything. You can pay attention to nearly all of the signals, that telemetry from production, the telemetry from test and dev. Pull it all back in, really understand the system.  Okay, but now we're going to move into understanding shifts. So these are fundamental shifts that are changing how we behave. How can we move from behavior A to behavior B? The first one we want to talk about is our former reactive nature and what's going to happen in the future.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=DAh1JHOe56w&amp;amp;t=1480" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fko2jk7gzyiskb8e2artl.jpg" alt="Thumbnail 1480" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Four Critical Shifts: From Reactive to Predictive, Planning to Simulation
&lt;/h3&gt;

&lt;p&gt;So here's the first shift, and I will start it with a question. Can we shift from being reactive to being predictive Solutions Architects? Before getting there, let me paint a picture. Think about migratory birds. They don't wait for winter to come or for food to be scarce. They sense micro-trends in wind patterns and temperature before deciding to migrate. They notice this even before the rest of the entire ecosystem notices. So their main strength lies in anticipation, not speed.&lt;/p&gt;

&lt;p&gt;Back to reactive versus predictive architects. Reactive architects, and I used to be this architect, get called in when there is a problem: systems down, cost spikes, systems having issues, the architecture getting into a snag.&lt;/p&gt;

&lt;p&gt;And then you get called in, you troubleshoot, you fix, you make everything go back to normal, and then you're the hero who saved the day. This feels good because we feel wanted, we feel needed, we feel that we've delivered and saved the day. But here is what a reactive architect looks like from 30,000 feet: we're always behind. Why? Because we're learning from failures and mistakes instead of preventing them.&lt;/p&gt;

&lt;p&gt;Now the predictive architect, that's a totally new game. Predictive architects analyze patterns and the relationships between patterns, and then they use tools like Kiro to query: here is my architecture; based on patterns that you've seen before, where are the risks to take care of? How can I be ready for the next milestone, for instance when we hit 10,000 users? How do we make sure we are avoiding risks? So they're always using AI. And becoming a predictive architect is actually possible now thanks to AI, because AI can help you analyze those patterns. It can help you detect problems, mistakes, failures, or technical debt accumulating before it becomes an incident.&lt;/p&gt;

&lt;p&gt;So reactive to predictive, that's possible because of AI. And think about it, instead of firefighting, you're fireproofing buildings. So this is the first shift and the second shift, John.&lt;/p&gt;

&lt;p&gt;So it does come back to those questions, Dina. If you paid attention to what Dina was just saying, it is those questions to be able to become part of that operational excellence in your team while being an architect at the same time. The time of the ivory tower solution architect is gone. You're going to have to build and you're going to have to get hands on. Let me tell you a little bit about moving from tech planning to tech simulation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=DAh1JHOe56w&amp;amp;t=1630" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxfq4penuaoosor4zuqye.jpg" alt="Thumbnail 1630" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You're not an architect sitting at a drafting desk and designing de novo, from a blank sheet of paper, anymore. You're going to have to go a lot further and a lot faster. I've got a confession to make, we're all friends here. I've come up with a cost estimate that was egregiously wrong in the past. Who else here has made assumptions that turned into the wrong cost estimate by the time the system got into production? And the hands that are down, I know you have too. I see you all, I feel you. This is a safe space.&lt;/p&gt;

&lt;p&gt;But those things were based on assumptions. It's not because you were doing a bad job as an architect, it's just because you didn't know. Why didn't you know? Because you weren't able to run the simulation to actually see how this was going to compare at scale, right? So there's a lot of these things that you can actually go do at architecture time. I talk about different times in your SDLC, your AI DLC, and if you do get a chance, go listen to the AI DLC talk later this week. Fabulous.&lt;/p&gt;

&lt;p&gt;So with the AI DLC, you're going to have architecture time, design time, development time, dev and test time, and run time. There are different times you can make different decisions, and a lot of those decisions come forward into architecting time. Now, instead of planning that architecture, writing the documents, estimating the cost, and coming up with something that's wrong, you can go down the path of simulating the architecture before a single line of production code is written.&lt;/p&gt;

&lt;p&gt;Take those two components that you're going to make work together that have never worked together before, develop a POC that tests their interactions, deploy it in your dev environment, and have it run at scale. What happens when it goes from a thousand users to 10,000? Plot that scaling curve, understand it, and bring that data back. Instead of a long assumptions section in your architecture description document, put down an appendix with the results of those POCs and everything that happened.&lt;/p&gt;
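
&lt;p&gt;For the load side of such a simulation, here is a minimal sketch with Locust against a hypothetical POC endpoint; run the same file at 1,000 and then 10,000 users and compare the latency curves before committing to the design:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# pip install locust; the /orders endpoint and host are placeholders.
# Run, for example:
#   locust -f pocsim.py --host https://poc.example.internal --headless \
#          --users 10000 --spawn-rate 100 --run-time 10m
from locust import HttpUser, task, between

class PocUser(HttpUser):
    wait_time = between(1, 3)   # seconds between simulated user actions

    @task
    def create_order(self):
        self.client.post("/orders", json={"sku": "demo", "qty": 1})

    @task
    def read_order(self):
        self.client.get("/orders/1234")
&lt;/code&gt;&lt;/pre&gt;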

&lt;p&gt;And this isn't just cost, this is everything in the well-architected framework. This is security, this is reliability, this is operational excellence. Make sure that everything that you're doing is actually tested, that you have the data and you're bringing it back and you're baking it into your design and your development phase. So this is what's going to happen. We're going to be able to simulate that and really run those POCs, run it at the time that you're doing the architecture design, run it at the time that you're choosing the architecture.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=DAh1JHOe56w&amp;amp;t=1790" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F85ilgzq3ndcfzg58cg93.jpg" alt="Thumbnail 1790" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is going to help you go from options to choices because it's going to winnow down that list from the 10 or 15 you chose to run POCs on down to five. This is actually where the rubber meets the road, pardon the pun, I've got the car on the screen. This is where the rubber meets the road with your architecture decision-making process, in order to become that warden of thought. So, two more shifts; here's the third one. How do we move beyond paying attention only at architecture review time? That's a point in time, right, Dina?&lt;/p&gt;

&lt;h3&gt;
  
  
  Constant Sentinel and Tech Debt Liquidation
&lt;/h3&gt;

&lt;p&gt;Yes, absolutely. This shift from architecture review to constant sentinel concerns how we maintain architecture quality over time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=DAh1JHOe56w&amp;amp;t=1830" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjrl3vqhmmzxq90vfl4ub.jpg" alt="Thumbnail 1830" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The third shift. Now the architects here in the room will understand what I'm talking about when I ask how we did architecture governance in the past. Actually, it's still happening now. We used to do monthly or quarterly architecture reviews where all teams would come together; we'd see work, then approve it or send some of it back for changes. In between those reviews, in the days and months between them, code gets written.&lt;/p&gt;

&lt;p&gt;Change happens. Change happens when we're not there in the room, right? It's as if we're working as checkpoint inspectors when the convoy is miles away. So instead of that, think about becoming the constant sentinel who's always guarding, always protecting. What does this look like in practice? Imagine every pull request gets evaluated against the architectural principles which the architect set up. Imagine that every change, every service deployment, gets checked against the Well-Architected guidelines.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=DAh1JHOe56w&amp;amp;t=1910" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsjme2c895fm36hquoo5j.jpg" alt="Thumbnail 1910" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is possible because of AI again. You cannot be in every pull request, you cannot be in every team standup, but AI can. Once configured with your architectural principles, the architectural patterns that a Solutions Architect has approved, and the anti-patterns to flag, it's there. It's reviewing and making sure that every change is in line with your architecture principles and guidelines. So this is the third shift: from point-in-time architecture reviews to a constant sentinel who's always watching and always guarding.&lt;/p&gt;
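
&lt;p&gt;As one illustrative shape for such a sentinel (a sketch, not a specific product): a CI step that sends each pull request's diff plus your principles file to a model and fails the build on a violation. The file name and model id are assumptions:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import subprocess
import sys
import boto3

MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"  # assumed model id

principles = open("ARCHITECTURE_PRINCIPLES.md").read()   # your own file
diff = subprocess.run(
    ["git", "diff", "origin/main...HEAD"],
    capture_output=True, text=True, check=True,
).stdout

bedrock = boto3.client("bedrock-runtime")
resp = bedrock.converse(
    modelId=MODEL_ID,
    messages=[{"role": "user", "content": [{"text":
        "You are an architecture sentinel. Flag any change in this diff "
        "that violates the principles, citing the principle. If the diff "
        "is clean, reply with exactly PASS.\n\nPrinciples:\n" + principles
        + "\n\nDiff:\n" + diff}]}],
)
verdict = resp["output"]["message"]["content"][0]["text"]
print(verdict)
sys.exit(0 if verdict.strip() == "PASS" else 1)   # nonzero fails the build
&lt;/code&gt;&lt;/pre&gt;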

&lt;p&gt;Taking the noise and turning it into signal is what you need to do. Tie it into your CloudWatch dashboards. Tie it into all of the things that you should be paying attention to as an architect. You can't read every log, right? So next we're going to go to tech debt. Who here's heard of tech debt before? Who here has tech debt? All right. So tech debt weighs you down, right?&lt;/p&gt;

&lt;p&gt;Tech debt was supposed to be a good thing, like a mortgage. You take out a loan, you pay it back, and you ship things to production faster. But that's not what happened with tech debt. What happens with tech debt is these support requests, these issues that keep coming in and keep burdening your team. It's that system in the corner, in that yellowing workstation box that everyone's afraid to touch. And the last person who knew that code has moved on, right? They've taken that 401k and they're writing you stories about their vacations in Mexico. That's what tech debt is today.&lt;/p&gt;

&lt;p&gt;You've got to take that old Java framework, that old .NET Full Framework, and be able to migrate it, to turn it into something else. I've often likened this to canyon jumping: you're on a motorbike and you're jumping a canyon, and you're not going to make it to the other side. Something changed, though. We're now able to take AI systems, point them at old code bases, and convert them into new code bases. You don't have to sit with the old albatross around your neck.&lt;/p&gt;

&lt;p&gt;Go take your tech debt and start with a single component. Take Amazon Q, take an agentic IDE, point it at a component, and see what it does to update that component in your architecture. Update a library at a time. And then there are some tools that we have. We have AWS Q Transform, right? So you can take that and transform some of your technology from the old .NET Full Framework into a modern .NET Core. And then maybe you're able to take it off of a more expensive operating system and put it onto Linux.&lt;/p&gt;

&lt;p&gt;Then you're able to scale a little bit faster, a little bit better. And so you don't have to have those security bugs that are going to nag you because you fear them in those old frameworks that are no longer maintained. So turn that revolving credit, that old debt into something where it's no longer the monkey on your back. Get rid of it, all right. So this is something that I think that all of us can do. Take that time and turn it into something strategic. Go take one of those problems that were impossible to solve before and begin to solve for them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=DAh1JHOe56w&amp;amp;t=2070" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy1byffonyqqaa1p1yytj.jpg" alt="Thumbnail 2070" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Centaur Moment: Amplifying Architect Superpowers with AI
&lt;/h3&gt;

&lt;p&gt;So this is where we're at with that collective wisdom of AIs  and Solutions Architects together. This is our centaur moment. This is where we adopt a new strategy and what we're doing with our role. And I'll tell you, it's not just the architect role. You're going to have to do this first for yourself and you're going to have to take this back to your organizations and teach everyone how to do this. Because if someone is a knowledge worker, they need to become a centaur and you'll help them get there first.&lt;/p&gt;

&lt;p&gt;So what you're going to do is to take that machine learning superpower, that AI superpower. And it's not just about automation, it's about amplifying yourself. So prediction, constantly look for patterns in customer behavior.&lt;/p&gt;

&lt;p&gt;In simulation, go try things out before they get into production, before they even get into a dev, test, or pre-production environment where it's more expensive. The closer it gets to production, the more expensive it is to change. The cheapest change happens on the architect's desk. Be the agent of change and make that change happen.&lt;/p&gt;

&lt;p&gt;Don't pay attention just at architecture review time or whenever something gets to your test because the buck stops here on the architect's desk. This is the last moment of change. Instead, become a constant sentinel. Pay attention to all the CloudWatch dashboards, all of the things that are happening in production, all of the code that's going into your version control system, and then be a tech debt liquidator.&lt;/p&gt;

&lt;p&gt;But we're not done. What we want to do is talk about different architect personas. We are all different. We're all going to do things in a different way. So let's talk about different profiles of architects and how they're actually going to address this in a new way.&lt;/p&gt;

&lt;h3&gt;
  
  
  Four Architect Personas: Inventor, Entrepreneur, Composer, and Advocate
&lt;/h3&gt;

&lt;p&gt;Thank you so much, John. So the different personas come from the notion that different situations require different kinds of architects, with different kinds of strengths. Every architect could wear one of these hats as their sole hat, or could change between hats depending on the situation. Now, when we're speaking about personas, you may find yourself in one of those personas, or perhaps across several, with pieces of each persona.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=DAh1JHOe56w&amp;amp;t=2190" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl0wngkvza2xpdjgo3wxl.jpg" alt="Thumbnail 2190" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first persona is an Inventor SA. You know this person, right? Perhaps you are this person. An Inventor SA has always got a side project. They are always diving deep, and they like technical capabilities. Think about your colleague, or perhaps yourself, who tries beta versions of software. If there is a new piece of software or a service release, they are the first to have spun it up before the keynote is over. They are the technical people who like to explore what AI can provide, what possibilities AI opens up.&lt;/p&gt;

&lt;p&gt;Now what is the main strength of an Inventor SA? It's exploration. They like to explore what AI can provide, and by applying the centaur method, a partnership with AI, they can be amplified. AI can help an inventor explore more, simulate more situations, and amplify their strengths even further.&lt;/p&gt;

&lt;p&gt;When do we need an Inventor Solutions Architect? When there is novelty, when there's a new proof of concept to build, when there is a new technology that we need to test out, when we need to understand what the return on investment on this generative AI proof of concept would be, for instance. Now, everyone has their challenging point. What are the main challenges for an Inventor SA? Sometimes they like to explore so much, they are in the lab for so long, that they forget to ship. So sometimes they need someone to remind them: hey, are we just exploring, or are we delivering value? So this is the Inventor SA. Moving on to my favorite SA.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=DAh1JHOe56w&amp;amp;t=2310" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhgd3d9qw2bq0t3nq93t0.jpg" alt="Thumbnail 2310" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Alright, so who here feels like they're the inventor at this point? You love to tinker, you get in the lab, and sometimes you just get lost in the subject. Alright, so next is the Entrepreneur SA. Maybe this is the SA who's the pal of the inventor, the one who's going to help them say: hey, we can ship right now.&lt;/p&gt;

&lt;p&gt;So the Entrepreneur is going to make bold bets. In every game of strategy there are some aggressive players on the board, and the Entrepreneur SA is that player. They're going to find something new and interesting, it's going to light up the back of their mind, and they're going to be the one who says: at 80%, we're ready. This is the one, let's go. They're going to make the bold bets and move.&lt;/p&gt;

&lt;p&gt;You know this architect. They'll say: I've seen enough. They don't just experiment, they commit, right? This is the person who's going to take something and evangelize it to everyone. And critically, they know how to scale what works. They know how to get it into production, they know how to build, they know how to ship, and they do it often. When they get excited about something, they're going to get everyone else excited about it too. I feel like this architect from time to time.&lt;/p&gt;

&lt;p&gt;So what is the Entrepreneur's centaur strength? Scaling through velocity. They're going to use AI to explore more of the solution space and find the thing that is the idea. They'll have that aha moment and they'll go. And when do you need that Entrepreneur? Whenever time is of the essence: whenever your market window is closing, whenever you see your competitors doing the things you wish you had done, the idea you wrote down but never actually went for. That's when you need the SA to behave in this way.&lt;/p&gt;

&lt;p&gt;So what's the Entrepreneur's challenge? Sometimes they move too fast, right? They will get up and run, and they are three, five moves ahead of everyone else in the organization.&lt;/p&gt;

&lt;p&gt;They will be going right up to shipping, and they need to get everyone else in line with them. Because in order to support a system, you need that whole team. You need everyone in the village around you. So that entrepreneur really needs someone to be their buddy, to be able to say, "Hey, let's bring everyone along. Let's go teach everyone what you've just learned. Let's go make that operational team really sing whenever they ship this thing into production. And let's go bring everyone along for the journey." So that's the entrepreneur. They're playing offense. These are the people who are going to accelerate you through this next set of changes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=DAh1JHOe56w&amp;amp;t=2460" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbsssfkysrxtdfrf26ua7.jpg" alt="Thumbnail 2460" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Okay, show of hands. Who feels like the entrepreneur SA at times? You've got something that you're just so excited about, and you want to be able to bring that through to folks. Okay, next we have the Composer SA, really bringing the synergy to everything. Definitely. The Composer SA is different: they see patterns in chaos. While the Inventor SA dreams about the possibilities of what AI can provide and is always in the lab exploring, and the Entrepreneur charges ahead and just executes and implements, the Composer SA takes a step back and asks, "How can this architecture work together?" They like to make sure that every architectural component is working very well with the others, so there is harmony and there is order in the architecture.&lt;/p&gt;

&lt;p&gt;What is the main strength of the Composer SA? It's structure: the harmony they bring creates order in what they build. And as a centaur solutions architect, the Composer uses AI a lot, because it helps them simulate, imagine, and visualize faster. It helps them bring this architecture, this structure, to mind faster. When do you need a Composer SA? When systems are complex, when architectures are sprawling, when there are many moving services that you need to put in order, and you need to create that structure and that harmony in your architecture.&lt;/p&gt;

&lt;p&gt;What are the composer's main challenges? They tend to over-engineer sometimes, and they need someone to remind them, "Are we really solving for the business, or are we over-engineering things that should be simple?" And then we move on to the final persona. All right, who feels like they've really got that systems thinking? They're that composer architect at this point? All right, a couple of us. So let's look at the advocate SA.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=DAh1JHOe56w&amp;amp;t=2590" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0l6c3g2xz8c73z2ai6ze.jpg" alt="Thumbnail 2590" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have so enjoyed my time working with Advocate SAs. I tend not to be this person myself, but I enjoy my time whenever I get to share ideas with this person. The Advocate SA is the person who really understands the people impacted by the architecture. A little analogy: whenever you have someone who is the architect of a house, you also have someone who is the construction lead for that house. Who lives in that house? Is it the architect? No. Is it the homeowner? Yes, it is. As an architect, you're going to build something for someone else to live in, and it had better be good. It had better fulfill the needs of the people who live there.&lt;/p&gt;

&lt;p&gt;There are two kinds of architectures that I like. There's one that is simple, elegant, beautiful, fun, and easy to use, and there's one that's quiet, sits in the corner, and gets its job done. Those are the only two architectures I believe should ever exist. Yet we create so many that are horrible for the people who live in them. The Advocate SA often comes from a business architect background and converts over into tech just because they get fascinated. But they really understand the market subsegment that you're in. They understand your niche, they understand your customers. They may understand the industry to a degree that others don't.&lt;/p&gt;

&lt;p&gt;They're going to take that level of understanding and really make that architecture hum for the people who live in it. So what is their centaur strength? What is their strategy when it comes to combining their powers with AI? They're going to amplify that empathy by taking all of that data and really understanding it: the information that comes from disparate systems, from unstructured sources, chat transcripts, all the information that comes back in, and then compose something that really understands who those end users are and what they really, really want. So when do you need that Advocate SA? When alignment is missing. Whenever something's just not clicking and people are not picking up what you're deploying, not really understanding it and getting it.&lt;/p&gt;

&lt;p&gt;You need someone who's going to behave in this way. And the Advocate's challenge? Too much empathy. They're going to try to please everybody. So who's been that person who tried to please everybody and really didn't get away with it? This is one of those things I have been really guilty of: the trade-off with the tech team, the trade-off with the business team, the trade-off with the customers, and then sometimes the wrong person gets their piece of the pie.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=DAh1JHOe56w&amp;amp;t=2750" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feyl3rmvbau4grljh6565.jpg" alt="Thumbnail 2750" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So they're going to need someone who tells them: hey, this is good enough, we can move with this, and then we can make adjustments after we ship. They over-engineer too; they try to get things too perfect, but they over-engineer from the perspective of all the different stakeholders inside the system. All right, so now let's talk about what happens next. This Monday is different from next Monday. You're going to learn a lot of things here at re:Invent, and what I want you to do is these three things to become that centaur SA.&lt;/p&gt;

&lt;h3&gt;
  
  
  Taking Action: Three Steps to Become a Centaur Solutions Architect
&lt;/h3&gt;

&lt;p&gt;Number one, and these are the kinds of prompts you can take back with you: query your architecture. Take a repo, take some of those design documents you have, and say: here is the problem we had six months ago, and I want you to do an assessment. Here's why we chose option A. What other options should we consider now, and what would be the trade-offs? Give me a three-year cost profile. Put that into the prompt and see what happens.&lt;/p&gt;

&lt;p&gt;You're going to need some things to make this happen. You're going to need some MCPs. So grab a Model Context Protocol server for AWS knowledge if you're going to build this in AWS, and grab the cost MCP so it can query the real cost of these systems; it will tell you things that will surprise you. Second, generate a simulation for the POC. Tell an agentic IDE to generate a proof of concept based on something you don't understand about your architecture. Include some code mockups, give it some resource estimates, and go get those performance benchmarks. You'll be surprised at what it creates.&lt;/p&gt;

&lt;p&gt;Run that, and then you'll have real data, real information to bring back to inform your architecture, and also to showcase something different than you had before. Third, you have paper cuts. There's a whole lot of stuff that you could automate that you just have not gotten around to. Automate one thing: create an automated script that analyzes the data for your architecture, gives you the top 10 compliance items from Well-Architected that need to be addressed, and ships that to you as an email every Monday.&lt;/p&gt;
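&lt;p&gt;As a rough illustration of that third step, here is a minimal TypeScript sketch using the AWS SDK for JavaScript v3. The workload ID, email addresses, and region are placeholders rather than details from the talk, and the weekly scheduling is left to something like an EventBridge cron rule.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import {
  WellArchitectedClient,
  ListLensReviewImprovementsCommand,
} from "@aws-sdk/client-wellarchitected";
import { SESClient, SendEmailCommand } from "@aws-sdk/client-ses";

const wa = new WellArchitectedClient({ region: "us-east-1" });
const ses = new SESClient({ region: "us-east-1" });

// Pull the top 10 open improvement items for one workload and email them.
export async function weeklyArchitectureReport() {
  const review = await wa.send(
    new ListLensReviewImprovementsCommand({
      WorkloadId: "YOUR_WORKLOAD_ID", // placeholder
      LensAlias: "wellarchitected",
      MaxResults: 10, // the "top 10" items mentioned above
    })
  );

  const lines = (review.ImprovementSummaries ?? []).map(
    (item) =&gt; `[${item.Risk}] ${item.QuestionTitle}: ${item.ImprovementPlanUrl}`
  );

  await ses.send(
    new SendEmailCommand({
      Source: "architecture-bot@example.com", // placeholder sender
      Destination: { ToAddresses: ["you@example.com"] }, // placeholder recipient
      Message: {
        Subject: { Data: "Monday architecture report" },
        Body: { Text: { Data: lines.join("\n") } },
      },
    })
  );
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Wired to a weekly schedule, something like this turns the "report on it anyway" idea below into something that reports to you.&lt;/p&gt;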

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=DAh1JHOe56w&amp;amp;t=2890" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F11cn84ism92htd7ll325.jpg" alt="Thumbnail 2890" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There's a feature in Amazon Quick Suite called Flows that can do something like this and get that email back out to you, so you understand your architecture on a regular basis. It's something you should be reporting on anyway, so have something report to you. That way you really build that deep understanding on a periodic basis, and you can generate a health score. But don't stop there. Oh, I did it, so pat yourself on the back? No, next you've got to do something.&lt;/p&gt;

&lt;p&gt;So go take one insight from this. Take your most junior teammate, pull them in, and actually start them on their journey. You're here learning this right now, but everyone back home, everyone back at the office, does not get to hear this. So take those people and start their journey as well. And last, reframe your value. When someone says, oh, AI is doing all of this work, that is not true. You've just heard this: it's a collaborative strategy where you work with the AI in order to produce more value. So you have to reframe that value.&lt;/p&gt;

&lt;p&gt;What's the architect worth? If you did that POC, you can say: we evaluated 38 different options for system B, and we de-risked a $2 million cost overrun by simulating it in this POC. Before, it would have taken two weeks; it took us two hours. Write that down and send that email to your boss, to your peers, and to the team that actually helped execute this. This is something where you can cheer for your teammates, you can cheer for yourself, and you actually reframe your value as an architect.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=DAh1JHOe56w&amp;amp;t=2960" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqtpycnkta3wcdg28j570.jpg" alt="Thumbnail 2960" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Lastly, the summary takeaways. You have new behaviors. Understand that you are now the warden of thought. Your decisions have to be judicious, and they have to come at high velocity in order to really understand what's happening. You have to judge the things that are happening and pull those new classes of problems off the shelf. You're going to change your role, but you're also going to be changing other people's roles at the same time, because they'll come ask you how you're doing that and how they can do it too.&lt;/p&gt;

&lt;p&gt;As for the responsibility shift, make sure you move from a reactive architecture mindset to a predictive one. Simulate the changes, take the components, put them together, and actually be informed: not educated guesses that you learn from in production, but a real understanding of your architecture. Go liquidate that tech debt and pay attention to your architecture all the time. As for the architect personas, understanding who they are and how they operate is very important, because you may have to be one of these different kinds of architects from time to time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=DAh1JHOe56w&amp;amp;t=3030" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3uaa44tbjpipcp9agpl1.jpg" alt="Thumbnail 3030" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Take all four of those different types, take them back, and really understand them. And Dina and I want to ask you one favor at the end of this: enjoy your re:Invent. Thank you for coming to see this talk as your first talk of re:Invent, and we'd love it if you could fill out the survey for us. Please go become that new centaur come Monday. Thank you, and enjoy your re:Invent. We'll take questions outside in the hallway so we can switch out. Thank you. Appreciate you.&lt;/p&gt;




&lt;p&gt;; This article is entirely auto-generated using Amazon Bedrock.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS re:Invent 2025 - Browser Power: Building MCP Tools That Do Real Work (AIM224)</title>
      <dc:creator>Kazuya</dc:creator>
      <pubDate>Thu, 11 Dec 2025 04:10:18 +0000</pubDate>
      <link>https://dev.to/kazuya_dev/aws-reinvent-2025-browser-power-building-mcp-tools-that-do-real-work-aim224-3fk0</link>
      <guid>https://dev.to/kazuya_dev/aws-reinvent-2025-browser-power-building-mcp-tools-that-do-real-work-aim224-3fk0</guid>
      <description>&lt;p&gt;&lt;strong&gt;🦄 Making great presentations more accessible.&lt;/strong&gt;&lt;br&gt;
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.&lt;/p&gt;

&lt;h1&gt;
  
  
  Overview
&lt;/h1&gt;

&lt;p&gt;📖 &lt;strong&gt;AWS re:Invent 2025 - Browser Power: Building MCP Tools That Do Real Work (AIM224)&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In this video, Patrick Gryczka, Senior Solutions Architect at Cloudflare, presents on building MCP (Model Context Protocol) servers using Cloudflare's developer platform. He explains how MCP servers enable LLM-based agents to interact with siloed data across applications like Slack and Jira. Cloudflare worked with companies including PayPal, Stripe, Webflow, and Atlassian to launch production MCP servers, with PayPal achieving deployment in just three days. The platform uses Durable Objects for serverless compute with stateful storage, offering horizontal scalability where each conversation gets its own sharded resources. He demonstrates an agent making bookings through browser rendering services, showcasing interactions beyond traditional APIs. Additional features include dynamic worker loaders for executing agentically generated code with near-zero millisecond cold starts.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/NvXGr6pFip8"&gt;
&lt;/iframe&gt;
&lt;br&gt;
; This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.&lt;/p&gt;

&lt;h1&gt;
  
  
  Main Part
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=NvXGr6pFip8&amp;amp;t=0" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdwsigja1uq0f17aqmigy.jpg" alt="Thumbnail 0" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Introduction to MCP Servers and Cloudflare's Developer Platform
&lt;/h3&gt;

&lt;p&gt;Hello. Very nice to meet you all. My name is Patrick Gryczka. I am a Senior Solutions Architect at Cloudflare, specifically covering our developer platform. Our developer platform is centered on offerings like Workers, a serverless edge-based compute offering that gives you storage as well as workflow and compute primitives.&lt;/p&gt;

&lt;p&gt;What I'll be presenting to you today is essentially a session on building MCP servers. We have worked with a lot of companies to help them bring their first production MCP servers live this year; we worked with a large batch of them around March or April to push things forward in a nice wave. But we'll start at the very beginning: what is MCP?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=NvXGr6pFip8&amp;amp;t=60" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvfw5cajxy0xq25p10xvt.jpg" alt="Thumbnail 60" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=NvXGr6pFip8&amp;amp;t=70" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F546uooruzfhucc53onsh.jpg" alt="Thumbnail 70" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;MCP was defined late last year by Anthropic. MCP is the Model Context Protocol. MCP servers act in a role similar to what APIs have played in the past, as a means of allowing interoperability with siloed data across different applications. What MCP servers now serve as is essentially an interface that allows your LLM-based agents to interact with that otherwise siloed data.&lt;/p&gt;

&lt;p&gt;So if you want an agent to be able to update a Slack message, update a Jira ticket, update any of this content that's available across specific services, an MCP server can act as the interface that provides those tools. Depending on its complexity, a tool can be something as simple as surfacing an existing API, but it can also be more complex: it can have more custom logic, and it can have more flexibility, because you can rely on an LLM not only to call an individual API but to actually make plans and execute more complex tasks.&lt;/p&gt;
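&lt;p&gt;To make that concrete, here is a minimal sketch of the "thin wrapper over an existing API" case, written in TypeScript against the MCP TypeScript SDK. The server name, tool name, and endpoint URL are illustrative, not from the talk.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "tickets-demo", version: "0.1.0" });

// The simplest kind of tool: a thin wrapper over an existing REST endpoint.
// The agent sees the tool name and input schema and decides when to call it.
server.tool(
  "get_ticket",
  { ticketId: z.string() },
  async ({ ticketId }) =&gt; {
    const res = await fetch(`https://api.example.com/tickets/${ticketId}`);
    return { content: [{ type: "text", text: await res.text() }] };
  }
);
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;A more complex tool keeps the same shape, schema-described inputs and structured content coming back, with the planning-friendly logic living behind it.&lt;/p&gt;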

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=NvXGr6pFip8&amp;amp;t=150" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5gqs5otkmbvbwpe3k5fb.jpg" alt="Thumbnail 150" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So we believe we have a pretty compelling offering and a way for you to launch MCP servers that lets you move quickly and maintain developer velocity while minimizing operational overhead. There are varying opinions about what the future looks like in terms of how many MCP servers organizations may run. Just among Cloudflare's own remote MCP offerings, we have about 13 of them, and they run the gamut from observability MCPs that surface your logs to your coding agents, to documentation ones, to ones that even offer up some of our services like Radar.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=NvXGr6pFip8&amp;amp;t=210" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpur9i011cgimpawodxpb.jpg" alt="Thumbnail 210" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And I think giving customers the ability to make granular choices about which features and functionality they want to incorporate into their workloads is probably going to be a bit of a winning strategy there. Touching back on our initial demo day earlier this year, companies like PayPal, Stripe, Webflow, Block, and Atlassian all worked with us to launch their first MCP servers. It's also why we feel very confident saying that we're one of the fastest places where you can build an MCP server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=NvXGr6pFip8&amp;amp;t=240" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe3v5fuvpzuosbgurefcz.jpg" alt="Thumbnail 240" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;PayPal worked with us to go, within three days, from a concept of what they could deliver to essentially having a production-ready MCP server for their invoicing scenarios. In terms of what you get with an MCP server on our platform, you can use our Agents SDK. An MCP server is not a packaged product; it is built on top of our primitives.&lt;/p&gt;

&lt;p&gt;So essentially, your MCP server is an abstraction on top of services like our Durable Objects. Durable Objects offer serverless compute tied to stateful storage. That allows you to have scale-to-zero compute for the actual calls made to your server, while still maintaining a stateful backend that holds both the short-term and long-term memory for your ongoing conversations.&lt;/p&gt;
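&lt;p&gt;As a sketch of how that layering tends to look, assuming Cloudflare's Agents SDK (the agents package) and the serve pattern from its examples: the class below is backed by a Durable Object, so each instance keeps its own state between scale-to-zero tool calls. The class name, tool, and route are illustrative.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import { McpAgent } from "agents/mcp";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

// Each instance of this class lives in its own Durable Object, so its
// conversation memory survives between scale-to-zero invocations.
export class BookingMCP extends McpAgent {
  server = new McpServer({ name: "bookings", version: "1.0.0" });

  async init() {
    this.server.tool(
      "check_availability",
      { court: z.number() },
      async ({ court }) =&gt; {
        // Real logic would query a backend; this sketch just echoes.
        return {
          content: [{ type: "text", text: `Court ${court} is open at 10am.` }],
        };
      }
    );
  }
}

// Route incoming MCP traffic to the Durable-Object-backed agent.
export default BookingMCP.serve("/mcp");
&lt;/code&gt;&lt;/pre&gt;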

&lt;p&gt;This also runs on our distributed architecture, so you get the benefit of a single global data plane. Regardless of whether your users come in from the United States, Europe, or Asia, and without having to manage separate control clusters or separate updates across regions or availability zones, you get localized compute and localized storage for their conversations, without the operational overhead.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=NvXGr6pFip8&amp;amp;t=320" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ptqh48wf738d8gorie1.jpg" alt="Thumbnail 320" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Many of you have probably already had plenty of conversations around MCP, but part of the reason for the excitement is that this feels like a new channel. It feels like a means of making services available through essentially agentic interactions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=NvXGr6pFip8&amp;amp;t=380" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu3fek6cpi9qk3lqm2afd.jpg" alt="Thumbnail 380" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It feels like if companies, whether those are public-sector interfaces or more e-commerce-style interactions, aren't surfacing their products, their offerings, their means of completing conversions and transactions, and their information through an MCP server, they may be passed by. Because to a degree, this feels like how information is going to be surfaced through an interface that seems to be encroaching on what search has traditionally covered.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=NvXGr6pFip8&amp;amp;t=390" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqjvhcyzlh2bio2gwidy2.jpg" alt="Thumbnail 390" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=NvXGr6pFip8&amp;amp;t=400" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe25rxg3110qmozhziy9l.jpg" alt="Thumbnail 400" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=NvXGr6pFip8&amp;amp;t=410" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbhfb6sjgvoj56to08f2p.jpg" alt="Thumbnail 410" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Live Demonstration and Technical Architecture of Durable Objects for MCP Implementation
&lt;/h3&gt;

&lt;p&gt;So here, I have just a simple chat interface that plugs into one of our MCP servers. What this MCP server is doing is interacting from within the Durable Object. It has access to all of our services, so it is able to make use of services like our browser rendering service in order to actually interact with web pages.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=NvXGr6pFip8&amp;amp;t=470" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnux70i4sz1w6vw8y4mhr.jpg" alt="Thumbnail 470" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=NvXGr6pFip8&amp;amp;t=480" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4i5mp9ss7wgcgsp50587.jpg" alt="Thumbnail 480" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=NvXGr6pFip8&amp;amp;t=500" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fli20bpgck7vw963lxyux.jpg" alt="Thumbnail 500" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Your agents may not always have clearly defined APIs that they can call and interact with. But with options like browser rendering, which gives you a headless browser, you can both scrape content from a page in HTML, Markdown, or JSON format and perform Puppeteer-style interactions such as clicking buttons and entering text. That lets your agent interface with pages and plan interactions without needing full API access to the system beforehand. Here, specifically, I'm going to ask it to make a booking for me for a given time slot. This interface is just meant to be a little showcase of availabilities across courts one through six for a given park, so I'm going to go ahead and ask it to book one.&lt;/p&gt;
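&lt;p&gt;A hedged sketch of that pattern with Cloudflare's Browser Rendering binding and its Puppeteer fork; the binding name MYBROWSER, the URL, and the selector are assumptions for illustration.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import puppeteer from "@cloudflare/puppeteer";

export default {
  async fetch(request: Request, env: { MYBROWSER: Fetcher }) {
    // Launch a headless browser session against the Browser Rendering binding.
    const browser = await puppeteer.launch(env.MYBROWSER);
    const page = await browser.newPage();
    await page.goto("https://example.com/courts"); // illustrative URL

    // Scrape the rendered page so the agent can plan against its structure...
    const html = await page.content();
    // ...then drive it Puppeteer-style: click the slot the agent picked.
    await page.click("#court-3-slot-1000"); // illustrative selector

    await browser.close();
    return new Response(html);
  },
};
&lt;/code&gt;&lt;/pre&gt;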

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=NvXGr6pFip8&amp;amp;t=510" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1veqlkftcuau7jgw5mk0.jpg" alt="Thumbnail 510" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=NvXGr6pFip8&amp;amp;t=530" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8yf65t8xwtisrhm4hrm.jpg" alt="Thumbnail 530" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=NvXGr6pFip8&amp;amp;t=540" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzgdrf2no9dzf6mjntu8d.jpg" alt="Thumbnail 540" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I can do a little bit of a showcase here, so we can see that it's passing in the information that's available. That should finally come through. In terms of interesting bits, this is largely meant to show that these interactions don't have to go through tailored paths, like a path that's well defined through an API; you can actually have your agents dynamically pull in the structure of a page and interact with a given UI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=NvXGr6pFip8&amp;amp;t=580" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvbk16od7t0mm545q447s.jpg" alt="Thumbnail 580" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In terms of the underlying value you get from the solution, in my mind it really comes down to the fact that a Durable Object offers you remarkable horizontal scalability. Each session, each of those chat interfaces that comes up, is tied to a single Durable Object instance, and a Durable Object namespace can instantiate across IDs programmatically, so every conversation gets its own sharded compute and sharded storage. You don't have to worry about suddenly scaling out when more traffic comes in, and you don't have to worry about scaling out your memory and storage when the millionth conversation starts, because that scales horizontally across newly instantiated resources automatically. It also gives you a decent bit of flexibility, because that memory is tied to the individual Durable Object instance: if you want conversations tied to a single user, you can implement that by tying them to a single ID, but you can also have multiple individuals interacting with one agent conversation just by tying their conversations back to the same ID.&lt;/p&gt;

&lt;p&gt;So you get the flexibility to support various interaction patterns, while also having an easy routing method if you decide to build multi-agent interfaces that need to talk to separate instances. If any questions come up as you work through implementation or approach, we have quite a bit of content covering the various SDKs and tooling we provide to improve agent workloads.&lt;/p&gt;

&lt;p&gt;So beyond browser rendering, we now also have dynamic worker loaders. Dynamic worker loaders allow you to dynamically provide JavaScript or TypeScript code, which is ideal for scenarios where you are generating code that you want to execute yourself. That code can be provided to a dynamic worker loader and executed with the same performance you get from a Worker: near-zero-millisecond cold start times and to-the-millisecond compute billing. That makes it very attractive for sandboxing use cases, and really for the execution of untrusted, agentically generated code.&lt;/p&gt;

&lt;p&gt;Thank you so much for your time, and let me know if there are any questions about our offerings.&lt;/p&gt;




&lt;p&gt;; This article is entirely auto-generated using Amazon Bedrock.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
