What happens when you give an AI direct access to a system most people have never heard of
There's a weird disconnect in the AI conversation right now. Everyone's talking about coding assistants, AI-powered DevOps, intelligent dashboards — but almost all of it assumes you're running modern cloud infrastructure. Kubernetes, PostgreSQL, GitHub Actions. The usual stack.
Nobody's talking about what happens when your critical business system runs on IBM i.
I work with IBM i every day. Warehouses, supply chains, enterprise systems that process millions of transactions and have been running for decades. These systems aren't going anywhere. They're stable, they're fast, and they do exactly what they're supposed to do.
But they've also been left out of the AI conversation entirely. And I kept wondering — does it have to be that way?
The problem with IBM i and modern tooling
If you've worked with IBM i, you know the feeling. You're reading about some new tool or platform, nodding along, and then you hit the part where they assume everything lives in a REST API or a cloud database. And you think — okay, that doesn't apply to me.
The irony is that IBM i systems often hold the most valuable data in an organization. Decades of transaction history. Real-time inventory positions. Production schedules that run 24/7. But getting that data out — or even just asking questions about it — still involves signing into a green screen, navigating menus, running queries manually, and interpreting raw output.
It's not that the data isn't accessible. Db2 for i is a perfectly capable database. QSYS2 SQL Services have made an incredible amount of system information queryable through standard SQL. The access is there. But the experience of getting to it hasn't kept up with what's happening everywhere else.
When I discovered MCP
Earlier this year I came across the Model Context Protocol — MCP for short. It's an open standard that lets AI assistants connect to external tools and data sources. Anthropic published it, and it's gained traction quickly. The idea is simple: you define tools that the AI can call, and the AI figures out when and how to use them based on what you're asking.
The moment I understood how it worked, the wheels started turning.
What if I could write a handful of tools — run a SQL query, list active jobs, check system status, browse the IFS — and expose them to Claude through MCP? Not building a chatbot. Not training a model on IBM i documentation. Just giving an AI the ability to reach into the system and pull back real data.
So that's what I did.
How it works (without the complexity you'd expect)
The architecture is embarrassingly simple, which is part of what makes it powerful.
You write a small Python server that runs on your local machine. This server connects to IBM i through ODBC — the same driver you probably already have installed if you use ACS (IBM i Access Client Solutions). The server defines tools as Python functions, each with a description that tells the AI what it does.
That's the entire stack. Python on your PC, ODBC to IBM i, MCP to the AI. No changes on the IBM i side. No new programs to deploy. No RPG modifications. No service entries. Nothing.
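To make that shape concrete, here is a minimal sketch of the stack in plain Python. The DSN name `IBMI` is a placeholder for whatever you configured for the IBM i Access ODBC driver, and the `TOOLS` dict stands in for a real MCP framework's tool registration; treat this as a sketch of the pattern, not a finished server.

```python
import json

# Hypothetical DSN name; use whatever you configured for the
# IBM i Access ODBC driver on your PC.
DSN = "IBMI"

def run_sql(query: str):
    """Run a query over ODBC and return rows as a list of dicts."""
    import pyodbc  # deferred import so the sketch loads without the driver
    with pyodbc.connect(f"DSN={DSN}") as conn:
        cur = conn.cursor()
        cur.execute(query)
        cols = [d[0] for d in cur.description]
        return [dict(zip(cols, row)) for row in cur.fetchall()]

def rows_to_json(rows) -> str:
    """Structured JSON is what the AI reasons over, not screen text."""
    return json.dumps(rows, default=str, indent=2)

# Each tool is a name, a description the AI reads, and a function.
TOOLS = {
    "run_query": ("Run a read-only SQL query against Db2 for i", run_sql),
}
```

In a real server the MCP library handles the wire protocol; the part you write is essentially these few lines, repeated once per tool.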
The secret sauce, if there is one, is QSYS2 SQL Services. IBM has been quietly building out an incredible set of SQL-accessible system functions over the last several technology refreshes. Active job info, job logs, spool files, IFS statistics, system values, user profiles, message queues, data areas — almost everything you'd normally access through CL commands or green screen menus is now available as SQL table functions.
This means every tool in the MCP server is just a SQL query. Clean input, structured output. The AI gets JSON back instead of green-screen text, which it can actually interpret and reason about.
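A few of those services, expressed the way the server sees them: each tool is nothing more than a named SQL statement. Column names follow the QSYS2 catalogs, but exact coverage varies by release and Technology Refresh, so treat these as illustrative.

```python
# Tools expressed as plain SQL over QSYS2 services.
# Exact columns vary by release / Technology Refresh; verify on your level.
TOOL_QUERIES = {
    "active_jobs": """
        SELECT JOB_NAME, AUTHORIZATION_NAME, ELAPSED_CPU_PERCENTAGE
        FROM TABLE(QSYS2.ACTIVE_JOB_INFO())
        ORDER BY ELAPSED_CPU_PERCENTAGE DESC
        FETCH FIRST 20 ROWS ONLY
    """,
    "system_status": """
        SELECT TOTAL_JOBS_IN_SYSTEM, ACTIVE_JOBS_IN_SYSTEM,
               SYSTEM_ASP_USED, AVERAGE_CPU_UTILIZATION
        FROM QSYS2.SYSTEM_STATUS_INFO
    """,
    "system_values": """
        SELECT SYSTEM_VALUE_NAME, CURRENT_NUMERIC_VALUE,
               CURRENT_CHARACTER_VALUE
        FROM QSYS2.SYSTEM_VALUE_INFO
    """,
}
```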
What it feels like to use
This is the part that genuinely surprised me.
I expected it to be useful. A faster way to run queries, maybe. A convenience layer. What I didn't expect was how much it would change the way I interact with the system.
You can ask things like:
"Show me all libraries that start with PROD and tell me how many tables are in each one."
And it just does it. Runs the query, counts the results, gives you a formatted answer. No navigating to WRKLIB. No typing SQL into STRSQL. Just a question and an answer.
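Behind an answer like that is ordinary catalog SQL. One plausible query the AI might generate, using the QSYS2.SYSTABLES catalog view; the helper function name is mine, not part of any API.

```python
def tables_per_library(prefix: str) -> str:
    """Build SQL counting tables per library for a given name prefix.

    QSYS2.SYSTABLES is the Db2 for i catalog view;
    TABLE_TYPE 'T' = SQL table, 'P' = DDS physical file.
    """
    assert prefix.isalnum()  # crude guard against SQL injection
    return f"""
        SELECT TABLE_SCHEMA, COUNT(*) AS TABLE_COUNT
        FROM QSYS2.SYSTABLES
        WHERE TABLE_SCHEMA LIKE '{prefix}%'
          AND TABLE_TYPE IN ('T', 'P')
        GROUP BY TABLE_SCHEMA
        ORDER BY TABLE_COUNT DESC
    """
```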
Or:
"What's the system health looking like right now? Anything I should worry about?"
It pulls CPU usage, memory, disk capacity, active job counts — and then interprets them. Not just raw numbers, but context. "CPU is at 23%, well within normal range. Disk usage on ASP 1 is at 71%, which is getting up there — you might want to keep an eye on that."
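Most of that interpretation comes from the model, but you can sketch the deterministic part yourself. Below is a toy summarizer over one row of QSYS2.SYSTEM_STATUS_INFO; the 70% and 90% thresholds are illustrative choices, not IBM guidance.

```python
def summarize_health(status: dict) -> str:
    """Turn a SYSTEM_STATUS_INFO row (as a dict) into a short summary.

    The 70%/90% disk thresholds and 70% CPU threshold are
    illustrative, not IBM recommendations.
    """
    notes = []
    cpu = status["AVERAGE_CPU_UTILIZATION"]
    asp = status["SYSTEM_ASP_USED"]
    notes.append(
        f"CPU at {cpu:.0f}%" + (" -- elevated" if cpu > 70 else ", normal range")
    )
    if asp > 90:
        notes.append(f"system ASP at {asp:.0f}% -- act soon")
    elif asp > 70:
        notes.append(f"system ASP at {asp:.0f}% -- keep an eye on it")
    else:
        notes.append(f"system ASP at {asp:.0f}%, fine")
    return "; ".join(notes)
```

In practice the AI does this framing on its own; code like this is only needed if you want the tool itself to pre-digest the numbers.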
The real magic happens when you chain things together in a conversation. You ask about active jobs for a specific user. Something looks odd. You say "check the job log for that one." It knows which job you mean from the previous response. Then you say "has this user had issues before? Check their message queue." And it does.
That kind of continuity — where context carries forward naturally — is something you can't replicate with traditional tools. Every green screen interaction is stateless. You close the screen, the context is gone. Here, the AI holds onto the thread and builds on it.
The tools that matter most
After using this for a while, I've found that certain tools get used far more than others.
SQL queries are the backbone. Being able to say "find all orders from the last 48 hours where the quantity is over 500" and get results instantly — that alone would justify the setup. But when you combine it with the AI's ability to interpret and summarize, it becomes something different. You're not just querying data, you're having a conversation about it.
System status is surprisingly useful. I used to check WRKACTJOB and WRKSYSSTS a few times a day. Now I just ask. And the AI remembers what "normal" looked like from previous checks, so it can flag when something changes.
Job logs are where the AI really shines. Reading job logs on IBM i is tedious — they're dense, full of informational messages mixed in with the important stuff. The AI is genuinely good at scanning through a job log and picking out the messages that matter. "There are 47 messages in this job log. Most are routine. But there's a CPF4131 at 14:23 indicating a level check on a file, and a follow-up CPD0006 — that's likely your issue."
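Pre-filtering helps here: handing the model only the serious messages keeps its context small. A sketch of the query behind such a tool, using the QSYS2.JOBLOG_INFO table function; the default severity cutoff of 20 is my choice, not a standard.

```python
def joblog_query(qualified_job: str, min_severity: int = 20) -> str:
    """Build SQL selecting the serious messages in one job's log.

    qualified_job looks like '123456/QUSER/QZDASOINIT'.
    The severity cutoff of 20 is an arbitrary starting point.
    """
    assert qualified_job.count("/") == 2  # number/user/name form
    return f"""
        SELECT MESSAGE_ID, MESSAGE_TYPE, SEVERITY,
               MESSAGE_TIMESTAMP, MESSAGE_TEXT
        FROM TABLE(QSYS2.JOBLOG_INFO('{qualified_job}'))
        WHERE SEVERITY >= {min_severity}
        ORDER BY MESSAGE_TIMESTAMP
    """
```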
IFS browsing and file reading is the one I didn't expect to use as much as I do. Being able to say "show me what's in /home/myuser/exports and read the most recent CSV" is just faster than navigating the IFS through any other method.
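Both halves of that request map to QSYS2 table functions. A sketch, assuming the IFS_OBJECT_STATISTICS and IFS_READ services available on recent Technology Refreshes; parameter and column names should be verified against your level.

```python
def list_ifs_dir(path: str) -> str:
    """Build SQL listing one IFS directory, newest first."""
    return f"""
        SELECT PATH_NAME, OBJECT_TYPE, DATA_SIZE, DATA_CHANGE_TIMESTAMP
        FROM TABLE(QSYS2.IFS_OBJECT_STATISTICS(
                 START_PATH_NAME => '{path}',
                 SUBTREE_DIRECTORIES => 'NO'))
        ORDER BY DATA_CHANGE_TIMESTAMP DESC
    """

def read_ifs_file(path: str) -> str:
    """Build SQL returning a stream file's contents as rows of text."""
    return f"""
        SELECT LINE
        FROM TABLE(QSYS2.IFS_READ(PATH_NAME => '{path}'))
        ORDER BY LINE_NUMBER
    """
```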
Why QSYS2 is the unsung hero
I want to spend a moment on this because I think it's underappreciated.
IBM has been building QSYS2 SQL Services for years. Every technology refresh adds more. And the beauty of it is that it turns everything on the system into structured, queryable data. You don't need to parse command output. You don't need screen-scraping programs. You just write SQL.
For this kind of AI integration, that's everything. The AI needs structured data to reason about. Give it a blob of green-screen text and it'll struggle. Give it a JSON array of job records with named fields and it'll do exactly what you want.
If your shop hasn't explored what's available through QSYS2 lately, it's worth looking. The coverage now is extensive — far beyond what most people realize. It's one of the best things IBM has done for the platform in recent years, and projects like this show why.
What this means for IBM i shops
I'm not going to oversell this. It's not going to replace your operators. It's not going to automate your entire system. It's a tool.
But it's a tool that addresses something I've seen at every IBM i shop I've worked with: the knowledge bottleneck.
There are usually one or two people who really know the system. Who know which libraries matter, what the critical jobs are, where to look when something breaks. When those people are unavailable — or when they eventually leave — that knowledge walks out the door.
An AI that can query the system, interpret results, and carry context across a conversation doesn't replace that expertise. But it makes it accessible to people who don't have it yet. A junior developer can ask "what are the biggest tables in PRODLIB?" and get an immediate, meaningful answer. A manager can check on system health without learning CL commands. A new team member can explore the system conversationally instead of reading documentation that may or may not be current.
That accessibility matters. IBM i's biggest challenge has never been capability — it's been the perception that it's impenetrable. Anything that makes it more approachable is a win for the platform's long-term viability.
What I'd build next
This was a starting point. The obvious extensions:
- Source member browsing — being able to read RPG or CL source through a conversation and have the AI explain what it does. Imagine onboarding new developers who can literally ask the AI "what does this program do?" while looking at the source.
- Authority analysis — "who has access to this file?" is a question that takes too long to answer today.
- PTF and system maintenance status — turning system administration checks into conversations.
- Cross-referencing — "which programs use this file?" by querying object references through SQL Services.
The protocol makes this extensible. Adding a new capability is just adding a new function with a description. There's no framework overhead, no deployment complexity. The hardest part is writing good SQL, and if you're on IBM i, you're already doing that.
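That extensibility can be made concrete with a small registration pattern. The decorator below is my own sketch, not MCP's API, but it shows how little a new capability costs: a name, a description the AI reads, and a query.

```python
# A tool is just a description the AI reads plus the SQL it maps to.
# The decorator pattern here is a sketch, not MCP's actual API.
TOOLS = {}

def tool(name: str, description: str):
    """Register a function as an AI-callable tool."""
    def register(fn):
        TOOLS[name] = {"description": description, "fn": fn}
        return fn
    return register

@tool("user_profiles", "List enabled user profiles and their last sign-on")
def user_profiles() -> str:
    return """
        SELECT AUTHORIZATION_NAME, PREVIOUS_SIGNON
        FROM QSYS2.USER_INFO
        WHERE STATUS = '*ENABLED'
    """
```

Adding authority analysis or PTF status later is the same three lines of registration around a different QSYS2 query.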
The bigger picture
I started building this because I was curious. I kept building it because it genuinely made me more productive. But what excites me most is what it represents.
For years, IBM i modernization has been framed as "move off the platform" or "rewrite everything." And both of those approaches are expensive, risky, and often unnecessary. The systems work. The data is valuable. The business logic is proven.
What MCP shows is that you can bring modern capabilities to IBM i without changing IBM i. You don't need to rewrite your RPG. You don't need to migrate your database. You don't need to replace anything. You just need to build a bridge — a thin layer that translates between what the AI expects and what IBM i provides.
QSYS2 SQL Services is one half of that bridge. MCP is the other. And the fact that you can connect them with a few hundred lines of Python — no middleware, no platform changes, no vendor contracts — is exactly the kind of pragmatic modernization that actually works in enterprise environments.
Wrapping up
I'm not naive about the limitations. There are security considerations — you need to be thoughtful about what you expose and to whom. There's the question of audit trails and compliance. And the AI will occasionally get a query wrong, just like any tool.
But the potential here is real. And it's the kind of potential that doesn't require permission from a steering committee or a six-month project plan. It's a Saturday afternoon experiment that turns into something you use every day.
If you're in an IBM i shop and you've been wondering where AI fits into your world, this might be the answer. Not a massive transformation initiative. Not a vendor platform. Just a conversation with your system that actually understands what you're asking.
Jaya Krushna Mohapatra is a Warehouse Management Systems Architect focused on enterprise integrations, IBM i modernization, and scalable backend systems.