Kazuya

AWS re:Invent 2025 - Human-Centric AI: Avoiding the High Cost of Getting It Wrong in IT (AIM205)

🦄 Making great presentations more accessible.
This project enhances multilingual accessibility and discoverability while preserving the original content. Detailed transcriptions and keyframes capture the nuances and technical insights that convey the full value of each session.

Note: A comprehensive list of re:Invent 2025 transcribed articles is available in this Spreadsheet!

Overview

📖 AWS re:Invent 2025 - Human-Centric AI: Avoiding the High Cost of Getting It Wrong in IT (AIM205)

In this video, Jeff Hunter, Field CTO at NinjaOne, discusses implementing AI in IT operations and endpoint management. He explains how NinjaOne manages devices across multiple platforms and addresses the challenges IT teams face with remote work and device proliferation. Hunter examines three AI categories: generative, agentic, and predictive AI, highlighting NinjaOne's Patch Intelligence feature and Katana AI assistant. He emphasizes the importance of guardrails, auditability, and human oversight in AI implementation, arguing that AI should augment rather than replace human expertise. The presentation advocates for a human-in-the-loop model where AI handles routine tasks while humans maintain strategic control, stressing the need for continuous training and questioning of AI outputs to ensure safe and effective autonomous endpoint management.


Note: This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.

Main Part

Thumbnail 0

Introduction to NinjaOne and the Growing Challenge of Endpoint Management

Let's get started with our first topic. For those not familiar with NinjaOne, it's very cool software that can help with IT operations for your organization. I'm excited for our next speaker, Jeff Hunter, who will talk about how to get it right when leveraging AI in the context of operations in a scalable format. So without further ado, Jeff.

Thank you very much. Hello everyone. My name is Jeff Hunter, and I'm the Field CTO at NinjaOne. I'm always curious who has heard of us, so just a quick show of hands—who has heard of NinjaOne or maybe knows what we do? Bonus points if you're an actual user.

Thumbnail 80

NinjaOne makes all the tools that IT teams need to manage devices. This could be to patch your devices, to alert whenever there's some sort of issue, to patch software, and to remotely access devices via screen share so you can help solve problems in real time. We do that across Windows, macOS, Linux, iOS, iPad OS, and Android devices.

Thumbnail 90

Having a platform to manage all of these devices is really important because businesses run on more endpoints than they ever have at any point in modern history. We're all used to an endpoint being a typical laptop or a server, but they could also be manufacturing equipment in factories, ticket kiosks, or digital billboards that you see on the side of the road. In my own personal experience, they are the movie theater projectors that we see movies on when we go to the cinema. On average, we're touching about four or more of these devices every day just as a part of our regular work lives.

These devices all need to be secured and supported over the course of that device's life cycle. The last five years have been the biggest, most seismic sea change in the IT industry ever. Overnight, organizations had to accommodate people working remotely, not in an office, but spread across different geographic locations. It has been a slow journey to put the toothpaste back in the tube on that particular front. We still see a ton of hybrid and remote work persisting even more than five years after it first began in earnest in 2020.

As always seems to happen, this means IT teams have to do more with less—more users to support, and those users are spread apart geographically in a way that they have not traditionally been. There are more devices to cover, and those devices are of more varying types, not just the traditional laptop and server that we're very familiar with. This is all happening in an era with a ton of merger and acquisition activity, shifting compliances, and different auditing standards that all need to be adhered to and accounted for.

In my experience, it seems like IT teams almost have to massively overproduce in order to meet whatever their goals happen to be. I would compare it to trying to roast a marshmallow over a campfire, except you're fifty feet above the campfire. You have to produce an enormous amount of energy just to get a little bit of the heat to go to the right spot. The solution for inefficiency is being more efficient, groundbreaking, I know. But really what we want to do is bring that marshmallow a little closer to the fire.

I promise I'll abandon that metaphor now, but really we want to take our more easy issues and automate them entirely. We want to take our more difficult, time-consuming, and multi-step troubleshooting processes and streamline them. We want to minimize the need for human involvement as much as possible. Now we introduce AI into the equation.

Thumbnail 260

Navigating the Risks and Considerations of AI in Endpoint Management

I have had conversations with thousands of IT teams over the past five years, and there are certain themes—common needs and common questions that start to emerge as a part of those many conversations. One of the questions I think I've heard more than any other is: what are you doing with AI now? Fundamentally, that question is asking how are you taking this new and exciting technology, what I view as fundamentally a new tool, and how are you applying that to the product's specific area of focus.

In the early days of AI bursting into the mainstream, my question was: what would you feel comfortable putting in the hands of an AI? Because there's a ton of potential for applying AI-based systems and methodologies into endpoint management.

This requires careful consideration. NinjaOne's first foray into an AI feature involved scraping public data from the internet to analyze sentiment about different Windows patches. Whenever there's a significant uptick in complaints about a particular Windows KB, that's a sign the patch is causing problems. We summarize all of that data in a brief description right in the menu where you actually approve those patches, giving you a quick and easy way to determine whether a patch will solve more problems than it creates. This is a great application of AI, as we all know it excels at taking massive amounts of data and condensing it into an easy-to-digest snippet.
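The sentiment-aggregation idea behind Patch Intelligence can be sketched roughly as follows. This is a hypothetical illustration of the concept only, not NinjaOne's actual implementation; the function name, data shape, and threshold are all assumptions.

```python
from collections import Counter

# Illustrative sketch: count negative mentions per Windows KB scraped from
# public sources, and flag patches whose complaint volume crosses a baseline.
def flag_risky_patches(mentions, baseline=5):
    """mentions: list of (kb_id, is_negative) pairs from scraped posts."""
    negatives = Counter(kb for kb, is_neg in mentions if is_neg)
    return {kb: n for kb, n in negatives.items() if n >= baseline}

mentions = [("KB5034441", True)] * 7 + [("KB5034123", True)] * 2 + [("KB5034123", False)]
print(flag_risky_patches(mentions))  # {'KB5034441': 7}
```

A real system would weight recency, since sentiment on a patch shifts over its lifetime, but the core shape is the same: condense many noisy data points into one approve-or-hold signal.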

However, there's also the possibility of AI taking more direct action. The ambition and dream is that AI will be able to perform autonomous endpoint management, where the first response to an issue comes from the AI or the system that's been set up, rather than necessarily requiring a human being to be directly involved. That's the ambition and the dream, but AI is a brand new technology, and with any new technology, there is considerable risk. As technology evolves, we tend to view that risk as more of a known quantity.

An AI is only as good as the documentation it's trained on. If that documentation is wrong, if the information it's trained on is outdated, or if the data set is incomplete, the AI will draw wrong conclusions. The human being using that AI will then amplify those wrong conclusions and spread misinformation. In my experience, correcting that type of misinformation is really difficult. People latch onto their first impression, what they hear first sets in their minds like concrete, and it can be very difficult to change.

If we apply this type of risk to an endpoint management platform, we're magnifying the potential for undesired outcomes substantially. At a core level, this is a marriage of a very powerful tool, which an endpoint management platform must be by design, and an AI that gives you the ability to make decisions very quickly. We probably all agree that it's a bad idea to make bad decisions quickly. That's the last thing you want to do. For example, some endpoint management tools give you the capability to generate code and scripts inside the platform. This can be convenient, but imagine deploying a script generated by an AI to all of your devices, and the AI decides that the shortest distance between A and B for freeing up additional disk space is simply to delete everything by running the format system drive command. That would be about the last thing we would want to happen.

When using any AI tool, we have to ask ourselves not just one question but many. For instance, is there a repeatable process in place to ensure we can regularly update the information the AI is making decisions on? Going back to my earlier example about NinjaOne's Patch Intelligence feature, when we're scraping for data on patches, that sentiment will fluctuate over time. We need to account for the fact that sentiment might not be negative when a patch is first released, but as more data comes into the system, we become more aware that the patch could be problematic.

We also need to plan for the worst-case scenario that could arise from misuse or bad implementation of AI. As part of that planning, we must ask ourselves what guardrails are in place to ensure that doesn't happen. If something undesired does occur, how is that decision-making process tracked? How can we audit to understand how we arrived at that particular decision so that adjustments can be made to prevent it in the future?
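The auditability requirement described here amounts to recording every AI-driven decision with its inputs and rationale so a human can trace how an outcome was reached. A minimal sketch of that idea, with field names that are purely illustrative:

```python
import datetime
import json

# Append-only log of AI-driven decisions. In practice this would be durable
# and tamper-evident storage, not an in-memory list.
audit_log = []

def record_decision(action, inputs, rationale):
    """Record what the AI did, what data it saw, and why."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "action": action,
        "inputs": inputs,
        "rationale": rationale,
    }
    audit_log.append(entry)
    return entry

record_decision("hold_patch", {"kb": "KB5034441"}, "negative sentiment spike")
print(json.dumps(audit_log[-1], indent=2))
```

With a log like this, answering "how did we arrive at that decision?" becomes a query rather than guesswork, which is what makes adjustments after an undesired outcome possible.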

Thumbnail 570

Categories of AI Implementation: From Generative to Predictive and Agentic

I want to briefly discuss a few categorizations of AI use as it applies to endpoint management. We're all very familiar with generative AI. Everyone here has used ChatGPT, Gemini, or Copilot at one point or another. We've all asked it to answer a question or do something for us. Flipping the switch, we have agentic AI, which takes action and performs routines, ideally without the involvement of a human being. For example, if a ticket comes into a help desk and is recognized as a common problem with a known solution, we could automatically apply that solution to the ticket as a first step before actually involving a person.

Handling that first step before escalating to a human technician is a great way to streamline and save time in the help desk process. However, the danger is a lack of adequate guardrails around the action you're taking. In my example, I mentioned applying a known solution to a problem, and I say a known solution because the danger of an ad hoc, generated-on-the-spot AI solution is pretty great. When you do that, you could be exacerbating the problem more than you're actually solving it.
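The "known solution first" pattern described above can be sketched in a few lines: the agentic step only applies pre-approved fixes, and anything it doesn't recognize goes straight to a person. The ticket categories and fix names here are made up for illustration.

```python
# Pre-approved fixes only; nothing is generated on the spot.
KNOWN_FIXES = {
    "printer_offline": "restart_spooler",
    "vpn_dropped": "reset_vpn_adapter",
}

def triage(ticket_category: str) -> str:
    """Apply a vetted fix if one exists, otherwise escalate to a human."""
    fix = KNOWN_FIXES.get(ticket_category)
    return f"auto-apply:{fix}" if fix else "escalate:human"

print(triage("printer_offline"))  # auto-apply:restart_spooler
print(triage("weird_bsod"))       # escalate:human
```

The guardrail is structural: because the lookup table is the only source of actions, the system cannot invent a remediation it was never approved to run.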

I don't know if anybody's ever tried asking an LLM the same question multiple times. You might get five somewhat similar answers, or you might get five answers that are really different, but it's pretty rare to get the exact same answer every single time you ask. I don't think anybody's getting five out of five on that. And that's of course because the answer is being generated in real time.

Thumbnail 680

On the predictive AI front, this is about taking in a massive amount of data and giving us insight into the future health of the device, helping us prevent problems before they occur. In action, we see this in taking a ticket, summarizing it, and potentially even producing a suggested answer to that ticket. In NinjaOne we see this with our AI assistant called Katana (we do everything ninja-themed), an assistant that's in beta right now.

We also see it in the form of that Patch Intelligence feature I mentioned earlier, which surfaces data about problematic patches so that you know about them. But then we take it a step further with a more agentic AI approach: we actually take those patches that have negative sentiment and remove them from the deployment queue. They won't even deploy; instead they're held so that a human being can look at them and decide whether they're actually applicable to their devices.

And then of course we can predict what is going to happen to devices in the future based on historical trends. We're working towards a world where we can track historical data for CPU and RAM, measure the digital user's experience on the endpoint, and then give a stoplight system: green, everything is good; yellow, maybe things are not so good on the device; red, the digital user experience is severely degraded. That gives us insight into how all of our end users are being impacted by the performance of their devices.
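The stoplight idea can be sketched as a simple classifier over recent utilization history. The thresholds and inputs here are illustrative assumptions, not NinjaOne's actual scoring model, which would presumably weigh many more signals than CPU and RAM.

```python
# Classify a device's experience from recent CPU and RAM utilization
# (values in 0.0-1.0). Thresholds are illustrative, not real product values.
def device_health(cpu_history, ram_history, warn=0.75, crit=0.90):
    avg = lambda xs: sum(xs) / len(xs)
    worst = max(avg(cpu_history), avg(ram_history))
    if worst >= crit:
        return "red"     # severely degraded experience
    if worst >= warn:
        return "yellow"  # warning signs
    return "green"       # everything is good

print(device_health([0.3, 0.4, 0.35], [0.5, 0.55, 0.6]))   # green
print(device_health([0.95, 0.97, 0.96], [0.6, 0.6, 0.6]))  # red
```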

Thumbnail 770

Now there is certainly a narrative around AI that I've heard a lot of, which is that it is about replacing people. Of course there's always the extent to which that's going to be true in certain jobs and in certain industries, but I would also say that a true replacement of a sizable chunk of the workforce is against historical trends. The industrial revolution was a long time ago. There's been a lot of invention since the cotton gin, and we all still have jobs. We're just not farmers.

The invention of the assembly line may have looked at the time like that was going to be a process that eliminated the need for people to be involved in things like manufacturing cars and different goods. I think we now know that that is not necessarily the case. So at NinjaOne we definitely recognize the tremendous power of AI-based tools, but we also strongly believe that the value of those tools comes from being tied to the human beings who are actually going to be using them.

Because if we look at the different categorizations of AI that we've discussed, what we end up with is the potential for a really powerful automation engine. Again, that first touch is not going to be a human being; the system tries the possible solutions we view as safe and sees if that fixes the issue before elevating to a human being. The goal is to help scale with business growth. The goal is to improve the device footprint, the number of devices a technician can effectively manage.
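The escalation flow described here (try safe remediations in order, verify after each, hand off to a human only if none works) can be sketched as a small loop. All names and the toy issue are illustrative assumptions.

```python
# Try each pre-approved remediation in order; escalate if none resolves it.
def auto_remediate(issue, safe_fixes, apply_fix, is_resolved):
    for fix in safe_fixes:
        apply_fix(issue, fix)
        if is_resolved(issue):
            return f"resolved_by:{fix}"
    return "escalate:human"

# Toy demo: only "clear_cache" actually fixes this particular issue.
state = {"fixed": False}
apply = lambda issue, fix: state.update(fixed=(fix == "clear_cache"))
print(auto_remediate("disk_slow", ["restart_service", "clear_cache"],
                     apply, lambda issue: state["fixed"]))
# resolved_by:clear_cache
```

Note the two human-facing hooks: the `safe_fixes` list is curated by people, and the terminal `escalate:human` branch keeps a person as the backstop, which is the human-in-the-loop shape the talk advocates.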

But human beings are fundamentally an intrinsic part of endpoint management, even as automation and AI might change the day to day duties for those who are working on the help desk. And so in this context we need AI to be a reliable tool. When somebody picks up that tool, we need it to work. Not every AI is created equal and not every AI is going to be equally adept at your real world workflows. If you give somebody a Phillips head screwdriver when they need a flathead, you're pretty close, but it's still not quite right.

And in the same light using a generic LLM is not likely to give you the good experience that you are going to get from something that's purpose built for the specific area of focus that we're working in.

Quite honestly, people don't use tools that they don't trust, and at the very least they tend not to use those tools very well. It's important to know how an AI came to a particular decision or outcome. The auditability and trackability of AI, I view that as just as important as the overall capability of what an AI is able to accomplish.

Without that risk mitigation and without that transparency, there is not going to be any trust between the human who is using that tool and the tool itself. At the end of the day, a human being is always going to be the one who's on the hook for a decision that is made by some sort of AI-based tool. At the heart of long-term AI success, I think that's built at the intersection of this new emerging technology and the human beings who are going to be using it.

The Human-in-the-Loop Model: Training and Trust in AI-Augmented IT Operations

A key part of adoption of any new technology is the humans who use it. If you give me a saw and you give my dad a saw and ask us each to build a table, his is going to be better. He is simply better with that type of tool. It's always important to remember there's a human being at the other end of software, and so consistent refresher training as this new technology continues to develop is paramount for ongoing success.

Thumbnail 1000

As AI becomes more integrated into IT workflows and into endpoint management workflows, organizations have to be proactive in preparing the people who are going to be using those tools to not only use them effectively but to oversee them and manage those tools effectively. There is a human layer that is necessary there to understand, to interpret, and to question AI-driven outputs so that we can use these systems not just efficiently but also safely.

I think we envision a world where endpoint management becomes less dependent on individual human skill sets. To use another metaphor, it's not so much about playing the individual instruments of the orchestra, it's about conducting the entirety of the orchestra at large. IT staff need to be trained to understand where and how AI is being applied. They need to understand how to interpret AI outputs as well as question them.

I would argue that the time to question them is always. Blind trust in AI decisions does lead to significant risk. Much in the same way that web designers use dark patterns to influence where our eyes are drawn and where we're most likely to click, and in the same way that early versions of iOS had sort of a textured look to it encouraging you to interact with the screen, a lot of the times LLMs give you overwhelmingly positive responses to benign queries and suggestions. This is all done to create this positive feeling in the actual user so that they come back and want to use it more, which is not necessarily the goal that you might want to have when managing endpoints and trying to improve business efficiency.

Thumbnail 1120

Given the quickly evolving AI landscape, continuous skill development is critical. Ongoing education, best practice groups, and regular meetings for cross-functional knowledge sharing are essential. I think having maybe a weekly or monthly meeting where a portion of that is dedicated to discussing the best suggestion that you had from an interaction with AI, and then what is the worst thing that happened, the worst thing that an AI suggested to you this week or month, helps level set amongst the team.

Integrating new technology is really difficult. AI is new, it's exciting, and it's having a momentous impact on how all of us work, but it's also having an impact on how our individual devices are managed. It's important to remember that tangible long-term benefits resulting from the integration of any technology are going to come down to how we align human expertise.

Thumbnail 1180

Training should always emphasize the importance of a human-in-the-loop model where AI is augmenting human knowledge and judgment, not substituting it and not replacing it. This is where NinjaOne is headed: autonomous endpoint management, lessening the need for human involvement for the most common and most annoying tasks, freeing up those resources so you can work and think more strategically. Taking a less experienced technician, allowing them to operate at a higher level and putting up those guard rails to prevent dangerous solutions and dangerous decisions from being implemented through the power of endpoint management.

Thank you everyone for being here. If you want to chat a little bit more about this, come visit us at booth 1216 right over there. My name is Jeff Hunter, and thank you very much for being here. Take care.


Note: This article is entirely auto-generated using Amazon Bedrock.
