
MCP Development with COBOL, Cloud Run, and Gemini CLI

Leveraging Gemini CLI and the underlying Gemini LLM to build Model Context Protocol (MCP) AI applications in COBOL deployed to Google Cloud Run.

Why not just use Python?

Python has traditionally been the main coding language for ML and AI tools. One of the strengths of MCP is that the actual implementation details are independent of the development language. The reality is that not every project is coded in Python, and MCP allows you to use the latest AI approaches with other coding languages.

COBOL? For AI? Where did you park the DeLorean?

The goal of this article is to provide a minimal, working MCP HTTP server in COBOL that can be run locally without any unnecessary extra code or extensions. Next, this HTTP server will be staged and deployed to Google Cloud Run.

So how does this sorcery work?

It is theoretically possible to implement the entire stack with only valid COBOL language constructs. The key to this solution is that modern compiled COBOL supports importing C bindings during the compilation and linking process.

A fully functional C MCP library is here:

GitHub - micl2e2/mcpc: Cross-platform C SDK for Model Context Protocol (MCP), in modern🚀 C23.
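
To see the binding mechanism in isolation, here is a minimal sketch of a GnuCOBOL program calling straight into the C runtime. It uses the standard C puts function rather than the mcpc API, and can be compiled with cobc -x -free:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. c-binding-demo.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       *> Z"..." produces a null-terminated string, as C expects
       01 c-msg PIC X(32) VALUE Z"Hello from GnuCOBOL".
       PROCEDURE DIVISION.
           *> The COBOL runtime resolves "puts" against the C library
           CALL "puts" USING BY REFERENCE c-msg
           STOP RUN.

The same CALL statement is all that is needed to reach the mcpc entry points, such as mcpc_server_new_tcp used later in this article.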

What Is COBOL?

COBOL (COmmon Business-Oriented Language) is a high-level, English-like programming language first designed in 1959 for business, finance, and administrative systems, especially large-scale data processing on mainframe computers. Despite its age, it remains crucial for critical infrastructure like banking and government systems, powering applications that process massive transaction volumes, thanks to its reliability and self-documenting syntax.

There are some legacy resources at IBM, but for the modern open-source compiler, here is the GnuCOBOL site:

GnuCOBOL

Installing COBOL

The step-by-step instructions vary by platform; for a basic Debian system, here are the steps, followed by a cobc -V check to verify the compiler install:

sudo apt update
sudo apt install build-essential
sudo apt install gcc
sudo apt install gnucobol4

cobc -V

xbill@penguin:~$ cobc -V
cobc (GnuCOBOL) 4.0-early-dev.0
Copyright (C) 2020 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Written by Keisuke Nishida, Roger While, Ron Norman, Simon Sobisch, Edward Hart
Built Sep 19 2022 04:28:17
Packaged Jun 06 2020 20:56:36 UTC
C version "12.2.0"
xbill@penguin:~$

Gemini CLI

If it is not pre-installed, you can install the Gemini CLI, which is used to interact with the source files and provide real-time assistance:

npm install -g @google/gemini-cli

Testing the Gemini CLI Environment

Once you have all the tools and the correct Node.js version in place, you can test the startup of Gemini CLI. You will need to authenticate with an API key or your Google Account:

gemini

Node Version Management

Gemini CLI needs a consistent, up-to-date version of Node.js. The nvm tool can be used to set up a standard Node environment:

GitHub - nvm-sh/nvm: Node Version Manager - POSIX-compliant bash script to manage multiple active node.js versions

C MCP Documentation

The official MCP C page provides samples and documentation for getting started:

GitHub - micl2e2/mcpc: Cross-platform C SDK for Model Context Protocol (MCP), in modern🚀 C23.

This C library is compiled, and its low-level bindings can then be called from the COBOL code.

Where do I start?

The strategy for starting MCP development with COBOL is an incremental, step-by-step approach.

First, the basic development environment is set up with the required system variables and a working Gemini CLI configuration.

Then, a minimal Hello World style COBOL MCP server is built with HTTP transport using the C library bindings. This server is validated with Gemini CLI in the local environment.

This setup validates the connection from Gemini CLI to the local process via MCP. The MCP client (Gemini CLI) and the MCP server both run in the same local environment.

Next, the MCP server is built and deployed to Google Cloud Run. This remote endpoint is validated with a local copy of Gemini CLI.

Setup the Basic Environment

At this point you should have a working COBOL and C environment and a working Gemini CLI installation. The next step is to clone the GitHub samples repository with support scripts:

cd ~
git clone https://github.com/xbill9/gemini-cli-codeassist

Then source init.sh from the cloned directory.

The script will attempt to determine your shell environment and set the correct variables:

cd gemini-cli-codeassist
source init.sh

If your session times out or you need to re-authenticate, you can source the set_env.sh script to reset your environment variables:

cd gemini-cli-codeassist
source set_env.sh

Variables like PROJECT_ID need to be set up for use in the various build scripts, so set_env.sh can be used to reset the environment after a time-out.
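
The deployment scripts consume these variables in the shell, but GnuCOBOL code can also read an environment variable directly with ACCEPT ... FROM ENVIRONMENT. A small illustrative sketch, not part of this sample:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. env-demo.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01 project-id PIC X(64) VALUE SPACES.
       PROCEDURE DIVISION.
           *> Read the PROJECT_ID variable exported by set_env.sh
           ACCEPT project-id FROM ENVIRONMENT "PROJECT_ID"
           DISPLAY "Active project: " FUNCTION TRIM(project-id)
           STOP RUN.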

Hello World with HTTP Transport

One of the key features that the standard MCP libraries provide is the abstraction of the various transport methods.

The high-level MCP tool implementation is the same no matter which low-level transport the MCP client uses to connect to an MCP server.

The simplest transport that the SDK supports is the stdio (stdin/stdout) transport, which connects to a locally running process. Both the MCP client and the MCP server must run in the same environment.

The HTTP transport allows the MCP client and server to be in the same environment or distributed over the Internet.

The connection over HTTP will look similar to this:

       PROCEDURE DIVISION.
       MAIN-LOGIC.
           CALL "mcpc_server_new_tcp" RETURNING server-ptr.

           IF server-ptr = NULL THEN
               CALL "log_info_c" USING BY REFERENCE err-srv-new
               STOP RUN RETURNING 1
           END-IF.
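The rest of MAIN-LOGIC follows the lifecycle highlighted in the code review later in this article: initialize, register the greet tool, start serving, then close. Below is a hedged sketch of that flow. The names register_greet_tool_c and mcpc_server_start_c, and the data items call-status and err-tool-reg, are placeholders for illustration only; the real entry points are defined in c_helpers.c and the mcpc headers.

           *> Hypothetical helper names -- the actual registration and
           *> start routines live in c_helpers.c and the mcpc headers.
           CALL "register_greet_tool_c" USING BY VALUE server-ptr
               RETURNING call-status
           IF call-status NOT = 0 THEN
               CALL "log_info_c" USING BY REFERENCE err-tool-reg
               STOP RUN RETURNING 1
           END-IF

           *> Serves MCP requests over HTTP until shutdown
           CALL "mcpc_server_start_c" USING BY VALUE server-ptr
               RETURNING call-status
           STOP RUN RETURNING call-status.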

Package Information

The code depends on the mcpc C library for the protocol layer and on small C helpers for logging. The greet callback program declares the null-terminated (Z"...") messages that are passed to those helpers:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. greet_cb_impl.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01 log-msg PIC X(30) VALUE Z"Executed greet tool".
       01 err-tool-null PIC X(30) VALUE Z"Error: tool-ptr is NULL".
       01 err-msg PIC X(30) VALUE Z"Error retrieving param".
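The PROCEDURE DIVISION of this callback then guards against a NULL tool pointer, retrieves the param argument through the C helpers, and logs the invocation. Here is a simplified sketch that assumes the wrapper in cob_helpers.c passes a single tool pointer; the argument-retrieval call is elided because its exact signature lives in c_helpers.c:

       LINKAGE SECTION.
       01 tool-ptr USAGE POINTER.
       PROCEDURE DIVISION USING BY VALUE tool-ptr.
           IF tool-ptr = NULL THEN
               CALL "log_info_c" USING BY REFERENCE err-tool-null
               GOBACK
           END-IF
           *> ...retrieve "param" and build the greeting through the
           *> C helpers (see c_helpers.c in the sample repository)...
           CALL "log_info_c" USING BY REFERENCE log-msg
           GOBACK.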

Installing and Running the Code

Build the release target with make on the local system:

xbill@penguin:~/gemini-cli-codeassist/mcp-https-cobol$ make
cc -std=c17 -Wall -Wextra -Imcpc/src -Imcpc -Dis_unix -DMCPC_C23PTCH_KW1 -DMCPC_C23PTCH_CKD1 -DMCPC_C23PTCH_UCHAR1 -DMCPC_C23GIVUP_FIXENUM -O2 -c c_helpers.c
cc -std=c17 -Wall -Wextra -Imcpc/src -Imcpc -Dis_unix -DMCPC_C23PTCH_KW1 -DMCPC_C23PTCH_CKD1 -DMCPC_C23PTCH_UCHAR1 -DMCPC_C23GIVUP_FIXENUM -O2 -c cob_helpers.c
make -C mcpc/src 
make[1]: Entering directory '/home/xbill/gemini-cli-codeassist/mcp-https-cobol/mcpc/src'
cc -Dis_unix -std=c17 -DMCPC_C23PTCH_KW1 -DMCPC_C23PTCH_CKD1 -DMCPC_C23PTCH_UCHAR1 -DMCPC_C23GIVUP_FIXENUM -Wall -Wextra -Werror -Wno-unused-function -Wno-unused-parameter -Wno-unused-label -Wno-error=unused-variable -Wno-error=unused-but-set-variable -O2 -Os -I.. -fPIC alloc.c log.c errcode.c anydata.c tool.c rsc.c prmpt.c server.c retbuf.c ucbr.c complt.c serlz.c mjson.c -c 
ar rcs libmcpc.a alloc.o log.o errcode.o anydata.o tool.o rsc.o prmpt.o server.o retbuf.o ucbr.o complt.o serlz.o mjson.o 
cc -s -o libmcpc.so alloc.o log.o errcode.o anydata.o tool.o rsc.o prmpt.o server.o retbuf.o ucbr.o complt.o serlz.o mjson.o -shared ../src/libmcpc.a 
make[1]: Leaving directory '/home/xbill/gemini-cli-codeassist/mcp-https-cobol/mcpc/src'
cobc -x -free -o server-cobol server.cob c_helpers.o cob_helpers.o mcpc/src/libmcpc.a -lm
xbill@penguin:~/gemini-cli-codeassist/mcp-https-cobol$

To test the code:

xbill@penguin:~/gemini-cli-codeassist/mcp-https-cobol$ make test
python3 test_server_http.py
Starting server on port 8081...
Sending initialize...
Initialize response: {
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "protocolVersion": "2024-11-05",
    "serverInfo": {
      "name": "mcp-https-cobol",
      "version": "0.1"
    },
    "capabilities": {
      "tools": {
        "listChanged": false
      }
    }
  }
}
Sending initialized notification...
Initialized notification status: 200
Sending greet call...
Greet response: {
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Hello, World!"
      }
    ]
  }
}
Test PASSED
Terminating server...
Server Stderr:
{"asctime": "2026-01-22T15:09:36Z", "name": "root", "levelname": "INFO", "message": "Executed greet tool"}

Gemini CLI settings.json

In this example, the COBOL source is compiled into a local binary that runs as an HTTP server that Gemini CLI connects to.

The default Gemini CLI settings.json has an entry for this local server:

{
  "mcpServers": {
    "mcp-https-cobol": {
      "httpUrl": "http://127.0.0.1:8080"
    }
  }
}

Start the Local HTTP server

Open a new terminal window and start the local copy of the HTTP MCP server:

xbill@penguin:~/gemini-cli-codeassist/mcp-https-cobol$ ./server-cobol 
{"asctime": "2026-01-22T15:45:07Z", "name": "root", "levelname": "INFO", "message": "Executed greet tool"}

Validation with Gemini CLI

Next, leave the local server running and open a new terminal window. Start Gemini CLI so the MCP connection over local HTTP to the COBOL code can be validated. The full Gemini CLI session will start:

> /mcp schema

Configured MCP servers:

🟢 mcp-https-cobol - Ready (2 tools)
  Tools:
  - greet
    Get a greeting from a local http server.
      Parameters:
      {
        "type": "object",
        "properties": {
          "param": {
            "type": "string",
            "title": "param",
            "description": "Greeting parameter"
          }
        }
      }

Then the MCP tool can be called:

> greet IBM 360

✦ I will call the greet tool with the parameter "IBM 360" to get a greeting.

╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ ✓ greet (mcp-https-cobol MCP Server) {"param":"IBM 360"} │
│ │
│ Hello, IBM 360! │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
✦ OK. "Hello, IBM 360!"

Deploying to Cloud Run

After the HTTP version of the MCP server has been tested locally, it can be deployed remotely to Google Cloud Run.

First, switch to the directory with the HTTP MCP sample code:

cd ~/gemini-cli-codeassist/mcp-https-cobol

Deploy the project to Google Cloud Run with the pre-built cloudbuild.yaml and Dockerfile:

cd ~/gemini-cli-codeassist/mcp-https-cobol
xbill@penguin:~/gemini-cli-codeassist/mcp-https-cobol$ make deploy

The Cloud Build will start:

echo "Submitting build to Google Cloud Build..."
Submitting build to Google Cloud Build...
gcloud builds submit . --config cloudbuild.yaml
Creating temporary archive of 84 file(s) totalling 369.9 KiB before compression.
Some files were not included in the source upload.

Check the gcloud log [/home/xbill/.config/gcloud/logs/2026.01.22/10.50.16.047795.log] to see which files and the contents of the
default gcloudignore file used (see `$ gcloud topic gcloudignore` to learn
more).

Uploading tarball of [.] to [gs://comglitn_cloudbuild/source/1769097016.215226-abe89002d03c460e89cab71e0ce03662.tgz]
Created [https://cloudbuild.googleapis.com/v1/projects/comglitn/locations/global/builds/15af4392-1b9d-47e1-933d-398faf51aa22].
Logs are available at [https://console.cloud.google.com/cloud-build/builds/15af4392-1b9d-47e1-933d-398faf51aa22?project=1056842563084].
Waiting for build to complete. Polling interval: 1 second(s).
------------------------------------------------------------- REMOTE BUILD OUTPUT --------------------------------------------------------------
starting build "15af4392-1b9d-47e1-933d-398faf51aa22"

The Cloud Build job needs to pull all the COBOL and C libraries into the build environment and generate the entire package from scratch.

When the build is complete, a service endpoint will be returned:

Starting Step #1
Step #1: Already have image (with digest): gcr.io/cloud-builders/gcloud
Step #1: Deploying container to Cloud Run service [mcp-https-cobol] in project [comglitn] region [us-central1]
Step #1: Deploying...
Step #1: Setting IAM Policy.........done
Step #1: Creating Revision................................................................done
Step #1: Routing traffic.....done
Step #1: Done.
Step #1: Service [mcp-https-cobol] revision [mcp-https-cobol-00002-njx] has been deployed and is serving 100 percent of traffic.
Step #1: Service URL: https://mcp-https-cobol-1056842563084.us-central1.run.app
Finished Step #1
PUSH

The service endpoint in this example is:

https://mcp-https-cobol-1056842563084.us-central1.run.app

The actual endpoint will vary based on your project settings.

Review Service in Cloud Run

Navigate to the Google Cloud console and search for Cloud Run, and then you can view detailed information on the Cloud Run service.

Cloud Logging

The remote server writes logs to stderr in standard JSON format. These logs are available from the Logs tab of the deployed Cloud Run service.

Validate HTTP connection

Once you have the endpoint, you can attempt a connection by navigating to it in your browser:

https://mcp-https-cobol-1056842563084.us-central1.run.app

You will need to adjust the exact URL to match the URL returned from Cloud Build.

You will get an error, since a plain browser request carries no JSON-RPC body and this endpoint expects a message in the MCP (JSON-RPC 2.0) format:

{"jsonrpc":"2.0","error":{"code":-32700,"message":""}}

Gemini CLI settings.json.cloudrun

Replace the default Gemini CLI configuration file (settings.json) with the pre-configured sample settings.json.cloudrun to use the Cloud Run version of the connection:

{
  "mcpServers": {
    "mcp-cloudrun-cobol": {
      "httpUrl": "https://mcp-https-cobol-$PROJECT_NUMBER.us-central1.run.app"
    }
  }
}

Copy the Cloud Run version of the Gemini CLI configuration file:

xbill@penguin:~/gemini-cli-codeassist/mcp-https-cobol$ cd .gemini
cp settings.json.cloudrun settings.json
xbill@penguin:~/gemini-cli-codeassist/mcp-https-cobol/.gemini$

Validation with Gemini CLI

The final connection test uses Gemini CLI as an MCP client, with the deployed Cloud Run service providing the COBOL MCP server. Start up Gemini CLI with the updated settings:

gemini
> /mcp list

Configured MCP servers:

🟢 mcp-cloudrun-cobol - Ready (2 tools)
  Tools:
  - greet
  - mcpc-info

> greet COBOL in the Cloud!

✦ I will call the greet tool with the parameter "COBOL in the Cloud!".
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ ✓ greet (mcp-cloudrun-cobol MCP Server) {"param":"COBOL in the Cloud!"} │
│ │
│ Hello, COBOL in the Cloud!! │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
✦ Hello, COBOL in the Cloud!!

Validation with MCP Inspector

MCP Inspector provides an interactive debugging tool for MCP servers:

npx @modelcontextprotocol/inspector

The Cloud Run endpoint can be accessed directly from MCP Inspector by entering the service URL.

Final Code Review

Gemini CLI was used to validate the project:

> what does this project do

✦ This project implements a Model Context Protocol (MCP) server using COBOL (GnuCOBOL). It allows AI models (like Gemini or Claude) to interact
  with a COBOL-based backend over HTTP.

  Core Functions:
   1. Exposes Tools: It provides a tool named greet that an AI can call. When invoked with a name, the COBOL logic processes the request and
      returns a greeting (e.g., "Hello, World!").
   2. Bridging COBOL and C: Since the MCP library (mcpc) is written in C, the project uses C helper functions and wrappers to safely pass data
      between the COBOL logic and the C-based networking/protocol layer.
   3. HTTP Communication: Unlike many MCP servers that use standard input/output (stdio), this one runs as an HTTP server. This makes it
      suitable for deployment in environments like Google Cloud Run.
   4. Structured Logging: It emits logs in a structured JSON format to stderr, allowing for modern observability and log analysis.

  Technical Stack:
   * GnuCOBOL: For the main application logic (server.cob).
   * C: For the protocol library (mcpc) and low-level glue code (c_helpers.c, cob_helpers.c).
   * Docker & Cloud Build: Ready-to-use configuration for containerizing and deploying the COBOL server to the cloud.
   * Python: Used for automated integration testing to ensure the COBOL server correctly handles JSON-RPC requests.

and to code review the project:

✦ Code Review: mcp-https-cobol

  The project is a well-structured and functional implementation of a Model Context Protocol (MCP) server using COBOL and C. It successfully
  bridges the legacy language with modern JSON-RPC protocols via the mcpc library.

  Analysis

  1. Architecture & Design
   * Separation of Concerns: The separation between COBOL business logic (server.cob) and C infrastructure (c_helpers.c, cob_helpers.c) is
     excellent. This keeps the COBOL code focused on the "what" (tools) while C handles the "how" (logging, protocol details).
   * Lifecycle Management: The server lifecycle (init, setup, start, close) is correctly implemented in the MAIN-LOGIC paragraph of server.cob.
   * Safety: The use of STOP RUN on initialization errors ensures the server fails fast if it cannot start correctly.

  2. Implementation Details
   * Argument Handling: The greet_cb_impl correctly allocates a buffer (param-buffer) and passes it to the C library. The use of LOW-VALUES to
     pre-zero the buffer ensures safety.
       * Note: The buffer is fixed at 4096 bytes. This is sufficient for the greet tool but could be a limitation for tools requiring large
         inputs (e.g., file contents).
   * Logging: The log_info_c function implements structured JSON logging to stderr. It correctly handles basic JSON escaping and uses gmtime_r
     for thread-safe timestamp generation.
   * C/COBOL Interop: cob_helpers.c correctly wraps the callback to bridge the function pointer types expected by mcpc and libcob.

  3. Build & Test
   * Makefile: The build system is robust, handling dependencies (mcpc), linting, and different build modes (debug/release).
   * Testing: test_server_http.py provides a good integration test, verifying the full request/response cycle without external dependencies.
   * Verification: make check passed successfully, confirming the code compiles, lints without issues, and passes the integration test.

  Summary
  The codebase is of high quality for a proof-of-concept. The patterns used for C/COBOL interoperability are correct and safe.

  Verdict: ✅ Approved

Summary

MCP development with COBOL using Gemini CLI was validated with an incremental, step-by-step approach.

A minimal MCP server with streaming HTTP transport was built from source code and validated with Gemini CLI running as an MCP client in the same local environment.

Then the MCP server was wrapped in a container and submitted to Google Cloud Build for deployment to Google Cloud Run. The remote MCP server was validated with a standard browser and with Gemini CLI.

Finally, remote MCP operations were performed from the local Gemini CLI installation against the COBOL MCP server hosted in Google Cloud Run.

This approach can be extended to more complex deployments and cloud-based options.
