I Built an AI Agent That Actually Runs My Windows Servers — Meet S.E.N.T.R.I
I got tired of AI tools that stop at suggestions.
You describe a task, the AI writes a script, and then... you're still the one who has to verify it, paste it somewhere, and run it. The human is still the execution layer. That always felt like the wrong split to me.
So I built S.E.N.T.R.I — the Server Engine Neural Task Runner & Intelligence — as the agentic core of ServerEngine. The idea is simple: you describe what you want in plain English, S.E.N.T.R.I figures out the plan and picks the right tools, and ServerEngine actually runs it.
No copy-pasting. No manual triggering. The loop closes on its own.
How It Works
You Command → S.E.N.T.R.I Orchestrates → ServerEngine Executes
- You Command — plain English: "Check availability on all managed servers and post a Slack summary"
- S.E.N.T.R.I Orchestrates — it interprets your intent, selects the appropriate Skills, and builds an execution plan
- ServerEngine Executes — the Skills fire inside ServerEngine's automation engine
That third step is the part I care most about. A lot of "agentic" tools today are really just planners. S.E.N.T.R.I actually runs the work.
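The loop above can be sketched in a few lines of Python. To be clear, none of this is S.E.N.T.R.I's actual code: the names are made up, and the keyword match is a toy stand-in for the model's reasoning. It just shows the shape of plan-then-execute with the human out of the execution path:

```python
# Hypothetical sketch of the Command -> Orchestrate -> Execute loop.
# None of these names come from S.E.N.T.R.I's real API, and the keyword
# match below is a toy stand-in for the LLM's actual reasoning.

def orchestrate(command: str, skills: dict) -> list:
    """Build an execution plan by picking skills whose keywords match."""
    plan = []
    for name, meta in skills.items():
        if any(word in command.lower() for word in meta["keywords"]):
            plan.append({"skill": name, "host": "all"})
    return plan

def execute(plan: list) -> list:
    """The execution layer: the engine, not the model, runs each step."""
    return [f"ran {step['skill']} on {step['host']}" for step in plan]

skills = {
    "Check Availability": {"keywords": ["availability", "uptime"]},
    "Slack Message": {"keywords": ["slack"]},
}
plan = orchestrate(
    "Check availability on all managed servers and post a Slack summary", skills
)
results = execute(plan)
```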
You Choose the AI Model
I didn't want to lock anyone into a single provider, so S.E.N.T.R.I supports three options:
| Provider | Models | Best For |
|---|---|---|
| Anthropic Claude | claude-opus-4-6, claude-sonnet-4-6, claude-haiku-4-5 | Complex multi-step IT reasoning |
| OpenRouter | Any listed model (Gemini, Qwen, Mistral…) | Flexibility via a single API key |
| Ollama (Local) | Gemma4, Llama3, and more | Air-gapped, fully offline environments |
The Ollama option was important to me. A lot of the environments this is aimed at — regulated industries, security-conscious IT teams — can't send data to the cloud. Fully local means zero cloud dependency, no exceptions.
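If you want a mental model of the provider choice, it's roughly this shape. The field names here are my own illustration, not ServerEngine's actual settings schema:

```python
# Illustrative config shape for the three provider options; these field
# names are invented for this sketch, not ServerEngine's real settings.
from dataclasses import dataclass

@dataclass
class ModelConfig:
    provider: str       # "anthropic" | "openrouter" | "ollama"
    model: str
    base_url: str = ""  # only the local/OpenRouter paths need an endpoint

def is_air_gapped(cfg: ModelConfig) -> bool:
    # Only the Ollama path keeps every request on-box.
    return cfg.provider == "ollama"

local = ModelConfig("ollama", "llama3", "http://localhost:11434")
cloud = ModelConfig("anthropic", "claude-sonnet-4-6")
```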
Skills: The Tool Layer S.E.N.T.R.I Calls Into
Skills are the callable tools S.E.N.T.R.I uses when executing a task. I ship a set of built-in skills with ServerEngine:
- Web Search
- Slack Message
- Check Time
- Plan Execution
- Check Dashboard
- Check Availability
- Check Logs
- Live Console
These handle the most common operations out of the box. But the more interesting part is the extensibility.
Your Existing Scripts Become AI Skills
This was the design decision that changed how I think about the whole product.
A runbook in ServerEngine is a single PowerShell or Python script — or a chain of them. Steps can pass variables between each other, reboot mid-sequence, and keep executing. But the important thing here is: you probably already have scripts like these. You don't need to rewrite anything.
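A toy illustration of that chaining idea: each step receives whatever variables the previous step produced. Real runbooks are your own PowerShell or Python scripts run by ServerEngine; this structure is purely illustrative:

```python
# Toy illustration of a runbook chain passing variables between steps.
# Real runbooks are PowerShell/Python scripts managed by ServerEngine;
# nothing here is the product's actual mechanism.

def step_collect(ctx: dict) -> dict:
    ctx["pending_updates"] = 3  # pretend step 1 queried the host
    return ctx

def step_report(ctx: dict) -> dict:
    ctx["summary"] = f"{ctx['pending_updates']} updates pending"
    return ctx

def run_chain(steps, ctx=None):
    ctx = ctx if ctx is not None else {}
    for step in steps:
        ctx = step(ctx)  # variables flow from step to step
    return ctx

result = run_chain([step_collect, step_report])
```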
Any script or chain of scripts can be promoted to an AI-callable skill with a single toggle in the Runbook Creator. Here's the flow:
1. Flip the toggle
Turn on "Enable Skill" for any runbook, and it's immediately part of S.E.N.T.R.I's toolkit.
2. Write a plain-English description
Something like "Use to reboot a server" or "Run when a full security audit is needed". S.E.N.T.R.I reads this at startup to know when to invoke it. The richer the description, the smarter the routing.
3. Set Approval Required (optional)
For sensitive or destructive operations, you can require human confirmation before anything fires. S.E.N.T.R.I pauses and waits. Certain actions should never run without a human in the loop.
4. Automatic discovery
New skills are picked up at the next session. No reconfiguration needed.
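Put together, a promoted skill carries roughly this metadata. The field names below mirror the UI labels from the steps above but are not the product's real schema:

```python
# Hedged sketch of a promoted skill's metadata. Field names mirror the UI
# labels described above; they are not ServerEngine's actual schema.
from dataclasses import dataclass

@dataclass
class RunbookSkill:
    name: str
    description: str            # what S.E.N.T.R.I reads to route tasks
    enable_skill: bool = False  # the toggle from step 1
    approval_required: bool = False

def dispatch(skill: RunbookSkill, approved: bool) -> str:
    if not skill.enable_skill:
        return "not exposed to the agent"
    if skill.approval_required and not approved:
        return "paused: waiting for human confirmation"
    return f"executing {skill.name}"

reboot = RunbookSkill(
    "Reboot-Server", "Use to reboot a server",
    enable_skill=True, approval_required=True,
)
```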
The Part I'm Most Proud Of: S.E.N.T.R.I Knows Nothing About Your Scripts
This is the security model, and it was a deliberate design choice.
S.E.N.T.R.I only knows two things: what skills are available and what hosts exist. That's it. It has zero visibility into:
- The content of your scripts
- Your credentials
- How authentication works
- What those scripts actually do internally
All of that — execution, authentication, credential handling — is managed entirely by ServerEngine's automation engine, completely separate from the AI layer.
This means when S.E.N.T.R.I decides to call the Reboot-Server skill, it doesn't get handed your WinRM credentials to do it. It just tells ServerEngine "run this skill on this host" and ServerEngine handles the rest using its own secure, encrypted credential store.
The AI is the brain. It never touches the keys.
This separation is what makes it practical to use AI in real production environments. You're not feeding sensitive infrastructure details into an LLM — you're just letting it decide what to run and when.
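The whole separation fits in a few lines. Everything in this sketch is hypothetical, but it shows the contract: the agent's entire output is a skill name and a host, and the credential lookup happens strictly on the engine side:

```python
# Hypothetical sketch of the AI/engine separation described above.
# The agent emits only WHAT to run and WHERE; the engine alone resolves
# credentials from its own store.

CREDENTIAL_STORE = {"web-01": "<encrypted WinRM secret>"}  # engine-side only

def agent_decides() -> dict:
    # The model's entire output. No credentials, no script contents.
    return {"skill": "Reboot-Server", "host": "web-01"}

def engine_executes(request: dict) -> str:
    # The engine looks up credentials itself; they never pass through the AI.
    _secret = CREDENTIAL_STORE[request["host"]]
    return f"ran {request['skill']} on {request['host']}"

outcome = engine_executes(agent_decides())
```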
Why I Built It This Way
Most IT teams already have automation. Scripts that work, procedures that exist, runbooks that have been tested and trusted. I didn't want S.E.N.T.R.I to require throwing any of that away. The intelligence wraps what you already have.
The security model came from the same place. Giving an AI agent direct access to credentials and script internals would be a hard no for most serious IT environments — and rightfully so. Keeping the AI layer deliberately ignorant of those details, and letting a proven execution engine handle them, felt like the only responsible way to build this.
Try It
S.E.N.T.R.I is live now as part of ServerEngine on the Microsoft Store. Free 14-day trial, all features included.
I'd love to hear from anyone already running agentic AI in IT workflows — what's working, what's not. Drop a comment.