<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jay</title>
    <description>The latest articles on DEV Community by Jay (@jay_krshn_1a9ac493fadf8).</description>
    <link>https://dev.to/jay_krshn_1a9ac493fadf8</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3833075%2F18bf8437-37b3-43ad-a655-c80c1f26108e.jpg</url>
      <title>DEV Community: Jay</title>
      <link>https://dev.to/jay_krshn_1a9ac493fadf8</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jay_krshn_1a9ac493fadf8"/>
    <language>en</language>
    <item>
      <title>I Connected Claude to My IBM i - And It Changed How I Think About Legacy Modernization</title>
      <dc:creator>Jay</dc:creator>
      <pubDate>Thu, 26 Mar 2026 14:34:23 +0000</pubDate>
      <link>https://dev.to/jay_krshn_1a9ac493fadf8/i-connected-claude-to-my-ibm-i-and-it-changed-how-i-think-about-legacy-modernization-19i1</link>
      <guid>https://dev.to/jay_krshn_1a9ac493fadf8/i-connected-claude-to-my-ibm-i-and-it-changed-how-i-think-about-legacy-modernization-19i1</guid>
      <description>&lt;h2&gt;
  
  
  What happens when you give an AI direct access to a system most people have never heard of
&lt;/h2&gt;




&lt;p&gt;There's a weird disconnect in the AI conversation right now. Everyone's talking about coding assistants, AI-powered DevOps, intelligent dashboards — but almost all of it assumes you're running modern cloud infrastructure. Kubernetes, PostgreSQL, GitHub Actions. The usual stack.&lt;/p&gt;

&lt;p&gt;Nobody's talking about what happens when your critical business system runs on IBM i.&lt;/p&gt;

&lt;p&gt;I work with IBM i every day. Warehouses, supply chains, enterprise systems that process millions of transactions and have been running for decades. These systems aren't going anywhere. They're stable, they're fast, and they do exactly what they're supposed to do.&lt;/p&gt;

&lt;p&gt;But they've also been left out of the AI conversation entirely. And I kept wondering — does it have to be that way?&lt;/p&gt;




&lt;h2&gt;
  
  
  The problem with IBM i and modern tooling
&lt;/h2&gt;

&lt;p&gt;If you've worked with IBM i, you know the feeling. You're reading about some new tool or platform, nodding along, and then you hit the part where they assume everything lives in a REST API or a cloud database. And you think — okay, that doesn't apply to me.&lt;/p&gt;

&lt;p&gt;The irony is that IBM i systems often hold the most valuable data in an organization. Decades of transaction history. Real-time inventory positions. Production schedules that run 24/7. But getting that data out — or even just asking questions about it — still involves signing into a green screen, navigating menus, running queries manually, and interpreting raw output.&lt;/p&gt;

&lt;p&gt;It's not that the data isn't accessible. Db2 for i is a perfectly capable database. QSYS2 SQL Services have made an incredible amount of system information queryable through standard SQL. The access is there. But the experience of getting to it hasn't kept up with what's happening everywhere else.&lt;/p&gt;




&lt;h2&gt;
  
  
  When I discovered MCP
&lt;/h2&gt;

&lt;p&gt;Earlier this year I came across the Model Context Protocol — MCP for short. It's an open standard that lets AI assistants connect to external tools and data sources. Anthropic published it, and it's gained traction quickly. The idea is simple: you define tools that the AI can call, and the AI figures out when and how to use them based on what you're asking.&lt;/p&gt;

&lt;p&gt;The moment I understood how it worked, the wheels started turning.&lt;/p&gt;

&lt;p&gt;What if I could write a handful of tools — run a SQL query, list active jobs, check system status, browse the IFS — and expose them to Claude through MCP? Not building a chatbot. Not training a model on IBM i documentation. Just giving an AI the ability to reach into the system and pull back real data.&lt;/p&gt;

&lt;p&gt;So that's what I did.&lt;/p&gt;




&lt;h2&gt;
  
  
  How it works (without the complexity you'd expect)
&lt;/h2&gt;

&lt;p&gt;The architecture is embarrassingly simple, which is part of what makes it powerful.&lt;/p&gt;

&lt;p&gt;You write a small Python server that runs on your local machine. This server connects to IBM i through ODBC — the same driver you probably already have installed if you use ACS (IBM i Access Client Solutions). The server defines tools as Python functions, each with a description that tells the AI what it does.&lt;/p&gt;

&lt;p&gt;That's the entire stack. Python on your PC, ODBC to IBM i, MCP to the AI. No changes on the IBM i side. No new programs to deploy. No RPG modifications. No service entries. Nothing.&lt;/p&gt;
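&lt;p&gt;A minimal sketch of that ODBC leg, with a placeholder system name and credentials:&lt;/p&gt;

```python
# Sketch of the ODBC leg of the stack. System name and credentials are
# placeholders; "IBM i Access ODBC Driver" is the driver name that ships
# with ACS on Windows (the name can differ with the Linux driver package).

def ibmi_connection_string(system, user, password):
    """Build an ODBC connection string for Db2 for i."""
    return (
        "DRIVER={IBM i Access ODBC Driver};"
        f"SYSTEM={system};UID={user};PWD={password}"
    )

# In the real server this string is handed to pyodbc:
#   import pyodbc
#   conn = pyodbc.connect(ibmi_connection_string("MYIBMI", "JAY", "*****"))
```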

&lt;p&gt;The secret sauce, if there is one, is QSYS2 SQL Services. IBM has been quietly building out an incredible set of SQL-accessible system functions over the last several technology refreshes. Active job info, job logs, spool files, IFS statistics, system values, user profiles, message queues, data areas — almost everything you'd normally access through CL commands or green screen menus is now available as SQL table functions.&lt;/p&gt;

&lt;p&gt;This means every tool in the MCP server is just a SQL query. Clean input, structured output. The AI gets JSON back instead of green-screen text, which it can actually interpret and reason about.&lt;/p&gt;
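&lt;p&gt;As an illustration of that pattern (not the actual server code), a tool is little more than a description paired with a query:&lt;/p&gt;

```python
# Illustrative tool registry -- not the real MCP SDK, which handles the
# registration and wire protocol for you. The point is the shape: each tool
# is a description the AI reads plus a QSYS2 query; run_query stands in for
# a pyodbc cursor so this sketch has no external dependencies.

import json

TOOLS = {
    "active_jobs": {
        "description": "List active jobs via QSYS2.ACTIVE_JOB_INFO.",
        "sql": "SELECT JOB_NAME, SUBSYSTEM, JOB_STATUS "
               "FROM TABLE(QSYS2.ACTIVE_JOB_INFO()) X",
    },
    "system_values": {
        "description": "Read system values via QSYS2.SYSTEM_VALUE_INFO.",
        "sql": "SELECT SYSTEM_VALUE_NAME, CURRENT_NUMERIC_VALUE, "
               "CURRENT_CHARACTER_VALUE FROM QSYS2.SYSTEM_VALUE_INFO",
    },
}

def call_tool(name, run_query):
    """Run the named tool's SQL and return structured JSON for the AI."""
    rows = run_query(TOOLS[name]["sql"])
    return json.dumps(rows)
```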




&lt;h2&gt;
  
  
  What it feels like to use
&lt;/h2&gt;

&lt;p&gt;This is the part that genuinely surprised me.&lt;/p&gt;

&lt;p&gt;I expected it to be useful. A faster way to run queries, maybe. A convenience layer. What I didn't expect was how much it would change the way I interact with the system.&lt;/p&gt;

&lt;p&gt;You can ask things like:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;"Show me all libraries that start with PROD and tell me how many tables are in each one."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;And it just does it. Runs the query, counts the results, gives you a formatted answer. No navigating to WRKLIB. No typing SQL into STRSQL. Just a question and an answer.&lt;/p&gt;
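&lt;p&gt;Under the hood, that question becomes something like this (one plausible query against the catalog, not the AI's literal output):&lt;/p&gt;

```python
# One plausible form of the SQL the AI generates for that question, using the
# QSYS2.SYSTABLES catalog view. TABLE_TYPE 'T' restricts it to SQL tables;
# widen the filter if you also want DDS physical files counted.

PROD_TABLE_COUNTS = """
SELECT TABLE_SCHEMA, COUNT(*) AS TABLE_COUNT
FROM QSYS2.SYSTABLES
WHERE TABLE_SCHEMA LIKE 'PROD%' AND TABLE_TYPE = 'T'
GROUP BY TABLE_SCHEMA
ORDER BY TABLE_SCHEMA
"""

def summarize(rows):
    """Shape result rows into the kind of sentence the assistant hands back."""
    parts = [f"{r['TABLE_SCHEMA']}: {r['TABLE_COUNT']} tables" for r in rows]
    return "; ".join(parts)
```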

&lt;p&gt;Or:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;"What's the system health looking like right now? Anything I should worry about?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It pulls CPU usage, memory, disk capacity, active job counts — and then interprets them. Not just raw numbers, but context. "CPU is at 23%, well within normal range. Disk usage on ASP 1 is at 71%, which is getting up there — you might want to keep an eye on that."&lt;/p&gt;
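&lt;p&gt;A sketch of what that health check can look like as a tool:&lt;/p&gt;

```python
# Sketch of the health-check tool. The columns come from the
# QSYS2.SYSTEM_STATUS_INFO service (verify names on your release); the 70%
# disk threshold is an assumption for illustration, not an IBM guideline.

import operator

HEALTH_SQL = (
    "SELECT SYSTEM_ASP_USED, TOTAL_JOBS_IN_SYSTEM "
    "FROM QSYS2.SYSTEM_STATUS_INFO"
)

def interpret_asp(asp_used_pct):
    """Turn the raw ASP-used percentage into a contextual note for the AI."""
    if operator.gt(asp_used_pct, 70.0):  # greater-than check
        return f"Disk usage at {asp_used_pct}% is getting up there; keep an eye on it."
    return f"Disk usage at {asp_used_pct}% is within normal range."
```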

&lt;p&gt;The real magic happens when you chain things together in a conversation. You ask about active jobs for a specific user. Something looks odd. You say "check the job log for that one." It knows which job you mean from the previous response. Then you say "has this user had issues before? Check their message queue." And it does.&lt;/p&gt;

&lt;p&gt;That kind of continuity — where context carries forward naturally — is something you can't replicate with traditional tools. Every green screen interaction is stateless. You close the screen, the context is gone. Here, the AI holds onto the thread and builds on it.&lt;/p&gt;




&lt;h2&gt;
  
  
  The tools that matter most
&lt;/h2&gt;

&lt;p&gt;After using this for a while, I've found that certain tools get used far more than others.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SQL queries&lt;/strong&gt; are the backbone. Being able to say "find all orders from the last 48 hours where the quantity is over 500" and get results instantly — that alone would justify the setup. But when you combine it with the AI's ability to interpret and summarize, it becomes something different. You're not just querying data, you're having a conversation about it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;System status&lt;/strong&gt; is surprisingly useful. I used to check WRKACTJOB and WRKSYSSTS a few times a day. Now I just ask. And the AI remembers what "normal" looked like from previous checks, so it can flag when something changes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Job logs&lt;/strong&gt; are where the AI really shines. Reading job logs on IBM i is tedious — they're dense, full of informational messages mixed in with the important stuff. The AI is genuinely good at scanning through a job log and picking out the messages that matter. "There are 47 messages in this job log. Most are routine. But there's a CPF4131 at 14:23 indicating a file member not found, and a follow-up CPD0006 — that's likely your issue."&lt;/p&gt;
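&lt;p&gt;That triage boils down to one table function plus a filter. A hedged sketch:&lt;/p&gt;

```python
# Job-log triage as a tool. QSYS2.JOBLOG_INFO takes a qualified job name
# ('number/user/name'); the message-type filter below is a simple heuristic
# for illustration, not the author's exact code.

def joblog_sql(qualified_job):
    """All messages for one job via QSYS2.JOBLOG_INFO."""
    return (
        "SELECT MESSAGE_ID, MESSAGE_TYPE, SEVERITY, MESSAGE_TEXT "
        f"FROM TABLE(QSYS2.JOBLOG_INFO('{qualified_job}')) X"
    )

def worth_surfacing(messages):
    """Drop routine chatter; keep the escape/diagnostic messages that matter."""
    keep = {"ESCAPE", "DIAGNOSTIC"}
    return [m for m in messages if m["MESSAGE_TYPE"] in keep]
```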

&lt;p&gt;&lt;strong&gt;IFS browsing and file reading&lt;/strong&gt; is the one I didn't expect to use as much as I do. Being able to say "show me what's in /home/myuser/exports and read the most recent CSV" is just faster than navigating the IFS through any other method.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why QSYS2 is the unsung hero
&lt;/h2&gt;

&lt;p&gt;I want to spend a moment on this because I think it's underappreciated.&lt;/p&gt;

&lt;p&gt;IBM has been building QSYS2 SQL Services for years. Every technology refresh adds more. And the beauty of it is that it turns everything on the system into structured, queryable data. You don't need to parse command output. You don't need screen-scraping programs. You just write SQL.&lt;/p&gt;

&lt;p&gt;For this kind of AI integration, that's everything. The AI needs structured data to reason about. Give it a blob of green-screen text and it'll struggle. Give it a JSON array of job records with named fields and it'll do exactly what you want.&lt;/p&gt;

&lt;p&gt;If your shop hasn't explored what's available through QSYS2 lately, it's worth looking. The coverage now is extensive — far beyond what most people realize. It's one of the best things IBM has done for the platform in recent years, and projects like this show why.&lt;/p&gt;




&lt;h2&gt;
  
  
  What this means for IBM i shops
&lt;/h2&gt;

&lt;p&gt;I'm not going to oversell this. It's not going to replace your operators. It's not going to automate your entire system. It's a tool.&lt;/p&gt;

&lt;p&gt;But it's a tool that addresses something I've seen at every IBM i shop I've worked with: the knowledge bottleneck.&lt;/p&gt;

&lt;p&gt;There are usually one or two people who really know the system. Who know which libraries matter, what the critical jobs are, where to look when something breaks. When those people are unavailable — or when they eventually leave — that knowledge walks out the door.&lt;/p&gt;

&lt;p&gt;An AI that can query the system, interpret results, and carry context across a conversation doesn't replace that expertise. But it makes it accessible to people who don't have it yet. A junior developer can ask "what are the biggest tables in PRODLIB?" and get an immediate, meaningful answer. A manager can check on system health without learning CL commands. A new team member can explore the system conversationally instead of reading documentation that may or may not be current.&lt;/p&gt;

&lt;p&gt;That accessibility matters. IBM i's biggest challenge has never been capability — it's been the perception that it's impenetrable. Anything that makes it more approachable is a win for the platform's long-term viability.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I'd build next
&lt;/h2&gt;

&lt;p&gt;This was a starting point. The obvious extensions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Source member browsing&lt;/strong&gt; — being able to read RPG or CL source through a conversation and have the AI explain what it does. Imagine onboarding new developers who can literally ask the AI "what does this program do?" while looking at the source.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Authority analysis&lt;/strong&gt; — "who has access to this file?" is a question that takes too long to answer today.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PTF and system maintenance status&lt;/strong&gt; — turning system administration checks into conversations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-referencing&lt;/strong&gt; — "which programs use this file?" by querying object references through SQL Services.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The protocol makes this extensible. Adding a new capability is just adding a new function with a description. There's no framework overhead, no deployment complexity. The hardest part is writing good SQL, and if you're on IBM i, you're already doing that.&lt;/p&gt;
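&lt;p&gt;To make "adding a new function" concrete, here's a sketch of the authority-analysis idea from the list above:&lt;/p&gt;

```python
# What "just adding a new function" can look like for authority analysis.
# QSYS2.OBJECT_PRIVILEGES is the catalog this would query; treat the exact
# column names as assumptions to verify against your release.

def who_has_access_sql(library, object_name):
    """Who holds what authority to an object, per QSYS2.OBJECT_PRIVILEGES."""
    return (
        "SELECT AUTHORIZATION_NAME, OBJECT_AUTHORITY "
        "FROM QSYS2.OBJECT_PRIVILEGES "
        f"WHERE SYSTEM_OBJECT_SCHEMA = '{library}' "
        f"AND SYSTEM_OBJECT_NAME = '{object_name}'"
    )

NEW_TOOL = {
    "description": "Answer 'who has access to this file?' for a given object.",
    "build_sql": who_has_access_sql,
}
```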




&lt;h2&gt;
  
  
  The bigger picture
&lt;/h2&gt;

&lt;p&gt;I started building this because I was curious. I kept building it because it genuinely made me more productive. But what excites me most is what it represents.&lt;/p&gt;

&lt;p&gt;For years, IBM i modernization has been framed as "move off the platform" or "rewrite everything." And both of those approaches are expensive, risky, and often unnecessary. The systems work. The data is valuable. The business logic is proven.&lt;/p&gt;

&lt;p&gt;What MCP shows is that you can bring modern capabilities to IBM i without changing IBM i. You don't need to rewrite your RPG. You don't need to migrate your database. You don't need to replace anything. You just need to build a bridge — a thin layer that translates between what the AI expects and what IBM i provides.&lt;/p&gt;

&lt;p&gt;QSYS2 SQL Services is one half of that bridge. MCP is the other. And the fact that you can connect them with a few hundred lines of Python — no middleware, no platform changes, no vendor contracts — is exactly the kind of pragmatic modernization that actually works in enterprise environments.&lt;/p&gt;




&lt;h2&gt;
  
  
  Wrapping up
&lt;/h2&gt;

&lt;p&gt;I'm not naive about the limitations. There are security considerations — you need to be thoughtful about what you expose and to whom. There's the question of audit trails and compliance. And the AI will occasionally get a query wrong, just like any tool.&lt;/p&gt;

&lt;p&gt;But the potential here is real. And it's the kind of potential that doesn't require permission from a steering committee or a six-month project plan. It's a Saturday afternoon experiment that turns into something you use every day.&lt;/p&gt;

&lt;p&gt;If you're in an IBM i shop and you've been wondering where AI fits into your world, this might be the answer. Not a massive transformation initiative. Not a vendor platform. Just a conversation with your system that actually understands what you're asking.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Jaya Krushna Mohapatra is a Warehouse Management Systems Architect focused on enterprise integrations, IBM i modernization, and scalable backend systems.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ibmi</category>
      <category>ai</category>
      <category>modernization</category>
      <category>python</category>
    </item>
    <item>
      <title>When HTTPAPI Fails: A Practical SOAP Integration Workaround on IBM i</title>
      <dc:creator>Jay</dc:creator>
      <pubDate>Thu, 19 Mar 2026 04:25:43 +0000</pubDate>
      <link>https://dev.to/jay_krshn_1a9ac493fadf8/when-httpapi-fails-a-practical-soap-integration-workaround-on-ibm-i-58kb</link>
      <guid>https://dev.to/jay_krshn_1a9ac493fadf8/when-httpapi-fails-a-practical-soap-integration-workaround-on-ibm-i-58kb</guid>
      <description>&lt;p&gt;&lt;em&gt;A real-world approach to handling complex SOAP APIs when traditional IBM i tools fall short&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;I've been doing integrations on IBM i for a while now. Most of the time, HTTPAPI handles everything I throw at it — REST, SOAP, whatever. It just works.&lt;/p&gt;

&lt;p&gt;But a few months ago, I hit a wall with a SOAP API that refused to play nice. And honestly, it took me longer than I'd like to admit to figure out why.&lt;/p&gt;

&lt;p&gt;This is what happened and how I ended up solving it.&lt;/p&gt;




&lt;h2&gt;
  
  
  What went wrong
&lt;/h2&gt;

&lt;p&gt;The setup was straightforward. Call a vendor's SOAP endpoint, send a request, get a response, process it in RPG. Standard stuff.&lt;/p&gt;

&lt;p&gt;Except the response kept coming back wrong.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connection was fine&lt;/li&gt;
&lt;li&gt;HTTP 200 every time&lt;/li&gt;
&lt;li&gt;But the response body was either empty or garbage&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ In some cases, the request returns HTTP 200 but the response isn’t usable. In other cases, the request may not return a proper response at all—especially when working with older SOAP-based APIs.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  When You Don’t Even Get a Proper Response
&lt;/h2&gt;

&lt;p&gt;In some scenarios, the issue goes beyond receiving an unusable response.&lt;/p&gt;

&lt;p&gt;With certain SOAP services—especially older or more rigid implementations—the request may fail before returning a meaningful HTTP response at all. This can show up as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connection resets
&lt;/li&gt;
&lt;li&gt;Timeouts
&lt;/li&gt;
&lt;li&gt;SSL handshake failures
&lt;/li&gt;
&lt;li&gt;Inconsistent or empty responses
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These situations are often harder to debug because there’s very little feedback to work with. The request appears to fail silently, and it’s not always clear whether the issue is with the request structure, the transport layer, or compatibility between systems.&lt;/p&gt;

&lt;p&gt;I spent a solid chunk of time convinced it was my SOAP envelope. Then I thought it was a namespace issue. Then maybe the headers. I kept tweaking things, and nothing changed.&lt;/p&gt;

&lt;p&gt;The frustrating part? When I tested the exact same request in Postman, it worked perfectly.&lt;/p&gt;




&lt;h2&gt;
  
  
  The debugging spiral
&lt;/h2&gt;

&lt;p&gt;I went through all the usual steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rebuilt the SOAP envelope from scratch&lt;/li&gt;
&lt;li&gt;Double-checked every namespace and header&lt;/li&gt;
&lt;li&gt;Verified SSL certificates were in place&lt;/li&gt;
&lt;li&gt;Compared raw payloads character by character&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Everything matched. The request was identical. But HTTPAPI kept giving me bad responses while Postman returned clean data every time.&lt;/p&gt;

&lt;p&gt;At some point I had to accept that the problem wasn't my request — it was somewhere in how the request was being sent.&lt;/p&gt;




&lt;h2&gt;
  
  
  Stepping back
&lt;/h2&gt;

&lt;p&gt;Once I stopped trying to force HTTPAPI to work, the answer became pretty obvious.&lt;/p&gt;

&lt;p&gt;The vendor's SOAP implementation was strict about certain HTTP behaviors — things like exact header ordering, specific TLS negotiation patterns, and chunked transfer encoding. HTTPAPI handles most of this fine for typical APIs, but this particular endpoint was picky in ways that were hard to control from RPG.&lt;/p&gt;

&lt;p&gt;So I asked myself: what if I just let something else handle the HTTP part?&lt;/p&gt;




&lt;h2&gt;
  
  
  The fix: Java as a middle layer
&lt;/h2&gt;

&lt;p&gt;I wrote a small Java program. Nothing fancy — maybe 80 lines. Its only job is to make the SOAP call and write the response to a file.&lt;/p&gt;

&lt;p&gt;The flow looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;RPG / CL
   ↓
Parameter file on the IFS
   ↓
Java program (handles the SOAP call)
   ↓
Vendor API
   ↓
Response file on the IFS
   ↓
Back to RPG for processing
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;RPG still runs the show. It writes out the parameters, kicks off the Java program, waits for the response, and processes it. The Java piece is just a bridge.&lt;/p&gt;
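&lt;p&gt;As a rough sketch of that bridge (the class, method, and argument names are mine, not the actual program), the core is just an &lt;code&gt;HttpURLConnection&lt;/code&gt; POST that writes whatever comes back to a file:&lt;/p&gt;

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class SoapBridge {

    // POST the SOAP payload and return the formatted result text.
    static String callSoap(String endpoint, String soapAction, String payload) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        // Explicit headers; the JVM sends them in a predictable order.
        conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        conn.setRequestProperty("SOAPAction", soapAction);
        byte[] body = payload.getBytes(StandardCharsets.UTF_8);
        // A fixed-length body avoids chunked transfer encoding for picky endpoints.
        conn.setFixedLengthStreamingMode(body.length);
        OutputStream out = conn.getOutputStream();
        out.write(body);
        out.close();
        int code = conn.getResponseCode();
        InputStream in = (code / 100 == 2) ? conn.getInputStream() : conn.getErrorStream();
        if (in == null) {
            return formatResult(code, "");
        }
        // readAllBytes needs Java 9+; on older JVMs a read loop does the same job.
        String response = new String(in.readAllBytes(), StandardCharsets.UTF_8);
        return formatResult(code, response);
    }

    // Build the simple file format that RPG reads back afterwards.
    static String formatResult(int code, String responseBody) {
        String status = (code / 100 == 2) ? "SUCCESS" : "FAILED";
        return "HTTP Response Code: " + code + "\n"
             + "Response Body:\n" + responseBody + "\n"
             + "STATUS: " + status + "\n";
    }

    public static void main(String[] args) throws IOException {
        // args: endpoint, SOAPAction, payload file on the IFS, output file on the IFS
        String payload = new String(Files.readAllBytes(Paths.get(args[2])), StandardCharsets.UTF_8);
        String result = callSoap(args[0], args[1], payload);
        Files.write(Paths.get(args[3]), result.getBytes(StandardCharsets.UTF_8));
    }
}
```

&lt;p&gt;That is the whole trick: all the fragile HTTP behavior lives in one small class, and RPG only ever touches files.&lt;/p&gt;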




&lt;h2&gt;
  
  
  Why Java specifically
&lt;/h2&gt;

&lt;p&gt;I've seen people suggest Python or Node for this kind of thing, and those would work too. I went with Java because it's already on every IBM i, so there's no extra setup needed.&lt;/p&gt;

&lt;p&gt;Java also gave me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Predictable TLS behavior (the JVM handles negotiation well)&lt;/li&gt;
&lt;li&gt;Full control over HTTP headers, including ordering&lt;/li&gt;
&lt;li&gt;Proper chunked encoding support&lt;/li&gt;
&lt;li&gt;Stack traces when things fail — which beats staring at a job log&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The first time I ran it, the response came back clean. Same request that had been failing for days.&lt;/p&gt;




&lt;h2&gt;
  
  
  Keeping it reusable
&lt;/h2&gt;

&lt;p&gt;I didn't want to hardcode anything, so the Java program reads from a parameter file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ENDPOINT=https://api.vendor.com/soap
SOAP_ACTION=SomeAction
PAYLOAD=&amp;lt;Soap:Envelope&amp;gt;...&amp;lt;/Soap:Envelope&amp;gt;
OUTPUT=/home/files/response.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Different API? Just change the file. The Java program doesn't care what it's calling.&lt;/p&gt;
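&lt;p&gt;Because the file is plain &lt;code&gt;key=value&lt;/code&gt; pairs, &lt;code&gt;java.util.Properties&lt;/code&gt; can load it directly; a minimal sketch (the class name is mine, and the key names follow the example above):&lt;/p&gt;

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class ParamFile {

    // Load ENDPOINT, SOAP_ACTION, PAYLOAD, OUTPUT from a key=value file.
    static Properties load(String path) throws IOException {
        Properties props = new Properties();
        FileInputStream in = new FileInputStream(path);
        try {
            // Properties handles key=value lines, comments, and blank lines.
            // Caveat: it treats backslashes as escapes, so a payload containing
            // them would need to double them up.
            props.load(in);
        } finally {
            in.close();
        }
        return props;
    }
}
```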

&lt;p&gt;The response gets written in a simple format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;HTTP Response Code: 200
Response Body:
&amp;lt;Soap:Envelope&amp;gt;...&amp;lt;/Soap:Envelope&amp;gt;
STATUS: SUCCESS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;RPG reads the status, grabs the body if it's good, and moves on. Clean and predictable.&lt;/p&gt;
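&lt;p&gt;In my setup that parsing happens in RPG, but the same checks are easy to express in Java when you want to test the bridge on its own (a hypothetical helper, not part of the original program):&lt;/p&gt;

```java
public class ResponseFile {

    // True when the trailing status line reports success.
    static boolean isSuccess(String fileText) {
        return fileText.contains("STATUS: SUCCESS");
    }

    // Pull out everything between the "Response Body:" marker and the STATUS line.
    static String body(String fileText) {
        int start = fileText.indexOf("Response Body:");
        int end = fileText.lastIndexOf("STATUS:");
        if (start == -1 || end == -1) {
            return "";
        }
        return fileText.substring(start + "Response Body:".length(), end).trim();
    }
}
```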


&lt;h2&gt;
  
  
  What I'd do differently
&lt;/h2&gt;

&lt;p&gt;Looking back, I probably should have switched approaches sooner instead of spending so much time debugging HTTPAPI. The signs were there — identical requests working in other tools but failing from RPG.&lt;/p&gt;

&lt;p&gt;I've since used this same pattern for two other integrations that had similar quirks. It's become a go-to option when the standard approach doesn't cooperate.&lt;/p&gt;




&lt;h2&gt;
  
  
  When this makes sense
&lt;/h2&gt;

&lt;p&gt;I'm not saying stop using HTTPAPI. For most APIs, it's still my first choice.&lt;/p&gt;

&lt;p&gt;But if you're dealing with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SOAP endpoints that are strict about HTTP behavior&lt;/li&gt;
&lt;li&gt;Responses that work everywhere except from IBM i&lt;/li&gt;
&lt;li&gt;TLS issues you can't pin down&lt;/li&gt;
&lt;li&gt;Debugging that's hit a dead end&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;...it's worth considering a hybrid approach. Sometimes the best thing you can do is let each tool handle what it's best at.&lt;/p&gt;




&lt;h2&gt;
  
  
  Wrapping up
&lt;/h2&gt;

&lt;p&gt;This wasn't a glamorous fix. No new framework, no architectural overhaul. Just a small Java program sitting between RPG and an API that wouldn't behave.&lt;/p&gt;

&lt;p&gt;But it solved a problem that had been eating up my time, and it's been solid since.&lt;/p&gt;

&lt;p&gt;⚠️ &lt;strong&gt;Note on Java Setup&lt;/strong&gt;  &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Make sure Java is properly available on your IBM i system.&lt;br&gt;&lt;br&gt;
A quick check is to run the &lt;code&gt;JAVA&lt;/code&gt; (or &lt;code&gt;RUNJVA&lt;/code&gt;) command from a CL prompt, or &lt;code&gt;java -version&lt;/code&gt; from QP2TERM, and confirm it executes successfully.&lt;br&gt;&lt;br&gt;
Also ensure your classpath and IFS paths are set correctly, as misconfiguration here can quietly cause failures.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you've hit similar walls with SOAP on IBM i, I'd be curious to hear how you handled it. There are probably a dozen different ways to approach this — this just happened to be the one that worked for me.&lt;/p&gt;

</description>
      <category>ibmi</category>
      <category>java</category>
      <category>api</category>
      <category>backend</category>
    </item>
  </channel>
</rss>
