🦄 Making great presentations more accessible.
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.
Overview
📖 AWS re:Invent 2025 - Building workflows of your choosing: agents, copilots, and everything in between
In this video, Brad Rumph, Field CTO at Tines, demonstrates how IT can strategically govern AI-driven workflows by blending deterministic and agentic approaches. He presents a live demo using AWS CloudWatch and EC2 instances, showing how Tines orchestrates infrastructure management with AI agents and human-in-the-loop oversight. The demo features automated CPU monitoring where utilization above 96% triggers case creation and Slack notifications for engineer approval before instance resizing. Rumph emphasizes that 86% of IT leaders consider orchestration critical for scaling AI, based on Forrester research with 400+ C-suite executives. He highlights Tines' stateless, private AI features running on AWS Bedrock with Anthropic, offering full auditability and governance while allowing organizations to build custom agents aligned with their SOPs and policies.
; This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.
Main Part
The Strategic Challenge: Governing AI-Driven Workflows in Modern IT Infrastructure
Good morning everyone, and I hope you're having a great conference so far. My name is Brad Rumph, and I'm the Field CTO at Tines. It's quite remarkable to think that I've been working in this space with model-driven tools, iPaaS platforms, API gateways, and AI/ML for the last 25 years, with the last 3 years being deeply focused on generative AI and where we find ourselves with agents today. It's been quite a journey, and now we're working with systems and platforms that are architected radically differently. We have RAG, which is almost a thing of the past, and we've got MCP, the Model Context Protocol, along with many other protocols I'm trying to keep up with, including the agent-to-agent (A2A) protocol and the ACP protocol, which is now being merged into A2A.
Today we're going to cut through all the noise and focus on how IT can strategically govern AI-driven workflows. We're going to explore how to blend different types of workflows, from fixed deterministic workflows to highly dynamic agentic ones. We'll also touch on where human judgment is added and where that matters the most. Our focus will be on infrastructure management using AWS for your infrastructure and leveraging Tines for your AI orchestration, and how you can securely scale without losing control.
We all know that AI is moving incredibly fast, and IT is being asked to do more than ever. They are managing system scaling, compliance, and now integrating AI into the mix. The opportunity is huge, but so is the complexity. How do you manage all these things? The challenge for IT isn't whether to use AI, but how to deploy it safely, strategically, and cost-efficiently in a way that's fully auditable.
Agentic workflows, which we define as autonomous systems acting across various different tools, can unlock faster, smarter decision-making while keeping IT in control. Eighty-six percent of IT leaders believe orchestration is critical to scaling AI. This isn't theoretical. We worked with Forrester and talked to over 400 people in the C-suite, SVPs, and directors of engineering at all different levels, and we came up with a bunch of statistics, and this one really jumped off the page.
So what does a modern IT environment look like? Modern IT environments today are defined by three core pillars, and these all create a very dynamic ecosystem of both complexity and opportunity. If we look at the left side, we see infrastructure. This is your digital fabric, your systems, your data, your APIs, your tools that you want to connect to. This could be AWS, other specialized SaaS platforms, or maybe you're in a hybrid cloud environment, or running some things on-premises. We see that a lot with our customers. This layer generates millions of different signals, alerts, and events every day. Tines connects to every single one of those, and we can transform these very raw, high-volume types of alarms and events into highly orchestrated, auditable actions.
If we move over to the right and look at intelligence, this is what we deem the cognitive layer. With the rise of AI, copilots, and autonomous agents, these systems can reason and act, but they really need to have some boundaries to operate safely. Tines is here as the definitive control plane, and we ensure that every intelligent action is governed, explainable, and inherently safe. If we shift down to the bottom and look at people, we view this pillar as the strategic operator. These are the decision-makers, the folks dealing with strategic priorities, managing critical exceptions, and holding the final accountability. With Tines, we embed human judgment into your workflows, and we allow humans to review, approve, and intervene where it makes sense to ensure that the most critical decisions are always informed and timely. At the intersection of what you see in the middle is Tines, an intelligent workflow platform that keeps IT operations secure, intelligent, and aligned with business goals. This is all wrapped in a layer of control.
Blending Deterministic and Agentic Workflows: Building AI Agents with Tines
When we look at the evolution of workflows, we all rely on highly deterministic workflows. These are fixed logic with predictable results, and they work well for tasks like provisioning or deprovisioning. However, when context shifts, or when we have to make decisions that depend on nuance or judgment, that is really where agentic workflows can shine. Agentic workflows excel at interpreting, reasoning, and adapting. The goal is not to replace deterministic workflows with agentic ones. We really think the answer is a combination of the two.
I would like to introduce the concept of AI agents in Tines. The definition and requirements can vary in terms of what agents look like. You can go to every single booth here and hear somebody talking about what agents have and what they do. The way that we view it is very much like the workflows that you can create in Tines. We believe that these workflows should belong to the people that own them, and those people should be the ones that create them. We are not going to force our own definition of what an AI agent looks like. We are going to give you the tools to build your own agents.
Agents should be part of any workflow process, and there is a massive degree of orchestration that has to happen, especially when you roll these things out company-wide. When you do that, prescribed approaches simply will not work at scale. However, the ability to tailor these agents to individual business needs will. We offer pre-built stories out of the box that you can look at as accelerators, as well as pre-built templates and API integrations. We allow our customers to take those, tailor their workflows to their needs, and we really provide the same type of ability with building your own agents.
Take your standard operating procedures and policies, and really focus on building the prompts that differentiate your business from your competitors. We really encourage and work with our customers to help them through that process. The core idea is that you can scale execution without sacrificing any of the oversight. So how do you decide what type of workflow you are going to use, deterministic or agentic? Well, if risk tolerance is critical and it has to be perfect every time with highly predictable outcomes, then you should stick to deterministic. If you need flexibility and the outcomes can vary, then agentic may offer some speed and adaptability, and really give you that flexibility that you need within your process.
Within Tines, we blend both. You can script the rules, you can layer in reasoning, and you can also insert human checkpoints when and where they are necessary. No automation is complete without human judgment. We often say humans can handle the gray areas, priorities, and exceptions. We believe that human in the loop is not a fail-safe bolted on after the fact, but rather a deliberate design choice. We all know the feeling that somebody should really take a look at something, that the nuance or context calls for a human review. You can choose what runs automatically and what needs review.
We offer a couple of different flavors of agents within Tines. There is chat-based, where a human can be in the loop and interact with it, or it can run completely task-based or autonomously. We are going to tee up a sample use case. I am going to show you a story that we pulled from our library in Tines. We configured it to watch alarms from monitoring set up against some of our EC2 instances for our leading FinTech application. We are really going to focus on when our CPUs are trending or running hot, with very high CPU utilization.
When usage and utilization are normal and the patterns look as expected, we can make certain decisions automatically, perhaps just running a flow that operates autonomously.
However, if something is unusual or abnormal and doesn't look right—perhaps we're running over a certain CPU percentage or threshold, say 96%—then we might want a human in the loop. We might route a notification to the engineer on call in Slack and have them look at it and decide whether to approve or disapprove a particular request to upsize or downsize an instance. We look at this as predictive availability where deterministic workflows can handle routine types of tasks, while humans step in only when something looks off.
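The decision rule described above can be sketched in a few lines. This is a minimal illustration, not Tines' actual API; the threshold values (90% and 96%) come from the talk, while the function and label names are hypothetical.

```python
def route_cpu_alarm(cpu_percent: float) -> str:
    """Decide how to handle a CPU utilization alarm."""
    if cpu_percent > 96:
        # Abnormal: route to the on-call engineer in Slack for approval.
        return "human_review"
    if cpu_percent > 90:
        # Elevated but within tolerance: handle autonomously.
        return "autonomous_flow"
    # Normal utilization: no action needed.
    return "no_action"

print(route_cpu_alarm(97.2))  # human_review
print(route_cpu_alarm(92.0))  # autonomous_flow
```

The key design point is that the thresholds, not the AI, determine when a human is pulled in; the agent operates only inside the branch the deterministic rule selects.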
We believe that workflows are evolving, and the future is a thoughtful combination of human-led, deterministic, and agentic approaches. We don't put you in one box or the other. We give you full autonomy to choose what makes sense so you can blend all three together within a single workflow or automation. We believe that intelligent workflows are truly the key to unlocking AI's true potential.
Before we hop into this story—and again, a story is a workflow in Tines speak—it's worthwhile to note that Tines builds all our AI features with trust front and center. All our AI runs stateless, private, in region, and is tenant scoped. There's no networking, training, storage, or logging of any of your data. You can use our Tines hosted model. We partner with Anthropic and run on top of AWS Bedrock, or you can bring your own models from OpenAI, Google Gemini, Mistral, or Llama. We give you a lot of options, but out of the box, we provide Anthropic models under the hood.
Live Demonstration: Intelligent EC2 Instance Management with Human-in-the-Loop Oversight
Your fearless leader, Gavin Belson, is becoming incredibly worried lately that Pied Piper is figuring out how to use intelligent workflows and AI agents to scale and manage their infrastructure much more efficiently than Hooli's. He's tasked his engineering team and SRE teams to look into how AI agents can be inserted into workflows and play a much bigger role in helping manage and maintain Hooli's infrastructure. What you're going to see is intelligent workflow automation and orchestration with a human in the loop, AI agents, and cases within Tines.
We're going to start by looking at our Hooli Prod1 instance in EC2. We can see that it's running with an instance type of t2.large. We're going to pop over into Tines. This is a Tines story. At the top, we're looking for alarms that we're getting from AWS CloudWatch. We have a couple of different triggers, basically conditionals that go this way or that way. On the left-hand side, if CPU utilization is greater than 90% but less than 96%, we go down that path. If it's greater than 96%, we go down the path on the right-hand side. If we go down the right, we create a case and send it for approval in Slack. If it's approved, we update the case and update the instance size by upsizing or downsizing it in EC2.
Up here you can see that an event just came in from CloudWatch. We get this alarm, and when we look at it, it's greater than 96% CPU utilization. We're going to create a case in Tines. We're using an AI agent to do that. We have some system instructions and a very simple prompt that says all we want to do is create a case in Tines, and we give it some metadata.
When I look at how the agent is actually doing this, we can see exactly the decisions it's making at runtime. We can see where it's created a case and is adding all the metadata for that particular EC2 instance. When we pop back over into the story, we can see that the case update has been completed, and we've now progressed to sending an approval to the engineer on call in Slack. The engineer can either choose to approve or not approve this request.
We're going to pop over to Slack. You can see we've got this channel set up, and a notification came in about an abnormal EC2 instance over 96% CPU. We have the instance name, instance ID, and a link to the case details that our AI agent created where you can get additional details. The engineer is asked if they want to approve, and they click yes. Come back into the story, and you can now see that we've progressed to determining what the instance size should be and updating that particular instance accordingly. Again, we can look at all the events that are flowing through.
Here's another very simple prompt instructing the agent on what to do. I hardcoded the instance ID here, but earlier I pulled it in through metadata. We can see the instance details, showing it's currently a t2.large type, as we saw at the beginning of the presentation. We can see what VPC and subnets are involved. Here we get the scaling decision and implementation that the agent is looking at based on our prompt. We get a summary of the actions taken. You can see that the agent has made the decision to upsize the particular instance from a t2.large to a t2.2xlarge.

We come back to the story and update the case. We can see exactly what's happening while this is running. Scrolling down to the bottom, we can see the agent's thinking and a case update summary. It provides resolution information and the root cause of why it made that decision. It's very important to note that we store all of these events, so you can retrieve them at any given time, which gives us the auditability and governance capabilities we provide within the platform.

The case has been completed and automatically tagged. The agent came up with these tags based on what the events look like. Here's a recap of that particular case: high CPU utilization was detected, here's what the previous state and configuration were, and here's the resolution and the recommendations that were made for the upgrade. We go back to EC2 and look at our HOOLI-PROD-1 instance, and we can see that it's now running and has been upgraded to a t2.2xlarge.

So what did you just see? You just saw how you can seamlessly build deterministic and agentic workflows with human-in-the-loop oversight added in. I showed you how easy it was to configure an AI agent within Tines. It took me literally a couple of minutes to write the prompt, though it needed some tweaking. We integrated directly with the AWS CLI to do all the upsizing that you saw.
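The scaling decision the agent made (t2.large up to t2.2xlarge, a jump of two sizes) can be modeled as walking up the size ladder within an instance family. The t2 size names below are real EC2 sizes; the function itself is an illustrative sketch, since in the demo the actual resize was performed through the AWS CLI, not this code.

```python
# Size ladder for the t2 family, smallest to largest.
T2_SIZES = ["nano", "micro", "small", "medium", "large", "xlarge", "2xlarge"]

def upsize(instance_type: str, steps: int = 1) -> str:
    """Return the instance type `steps` sizes up within the same family."""
    family, size = instance_type.split(".")
    idx = T2_SIZES.index(size)
    new_idx = min(idx + steps, len(T2_SIZES) - 1)  # cap at the largest size
    return f"{family}.{T2_SIZES[new_idx]}"

print(upsize("t2.large", steps=2))  # t2.2xlarge, matching the demo
```

In practice a resize like this means stopping the instance, modifying its instance type, and starting it again; the sketch only captures the decision, not those side effects.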
In terms of cases, we have really robust case management capabilities within the Tines platform, which enables all on-call engineers or anybody within the organization to collaborate on that particular case and see exactly what happened. With very little instruction within the prompt, we got a lot of detail auto-populated automatically within that case in Tines.
With Tines acting as the intelligent workflow platform, you don't need to be an expert in Java, Python, or scripting to eliminate muck work within your organization. With Tines, you can make the most of your move to AWS. This isn't about launching a single AI agent; it's about governing every workflow from end to end. IT shifts from being a gatekeeper to an orchestrator, providing secure, intelligent execution at scale. With AWS and Tines and your policies, your SOPs, and the prompts that you create, you can gain speed without losing control. AWS provides the reliability while Tines provides the control. Your policies and prompts define how it all comes together in the end.
I'm going to leave you with this question, and I want you to think about it. From an infrastructure perspective, what's one process, or more, that you would let run completely ungoverned and completely autonomously? And what are some processes where you would deem that a human should be in the loop to provide oversight? That's really where orchestration meets opportunity. The best way to understand Tines is to see Tines in action. I want to encourage everyone to come by booth 1849. I'd be happy to continue the conversation or answer any questions you have. I appreciate you all coming out and I hope you enjoy the rest of the conference. Thank you.
; This article is entirely auto-generated using Amazon Bedrock.