Leveraging Gemini CLI and the underlying Gemini LLM to deploy Model Context Protocol (MCP) AI applications built with C++ to Google Cloud Run.
Why not just use Python?
Python has traditionally been the main language for ML and AI tooling. One of the strengths of the MCP protocol is that the implementation details are independent of the development language. The reality is that not every project is coded in Python, and MCP lets you use the latest AI approaches from other languages.
C++? Doesn’t that leak Memory and Crash?
The goal of this article is to provide a minimal, working MCP server in C++ that can be run locally and deployed to the cloud, without any unneeded extra code or extensions.
There are a few options for C++ MCP libraries. The C++ MCP library used for the demo is here:
GitHub - hkr04/cpp-mcp: Lightweight C++ MCP (Model Context Protocol) SDK
What Is C++?
C++ is a powerful, high-performance, general-purpose programming language that supports multiple programming paradigms, including object-oriented, procedural, and generic programming. Developed by Bjarne Stroustrup in the early 1980s as an extension of the C language (initially called “C with Classes”), C++ gives developers a high level of control over system resources and memory.
There are several C++ reference sites for a deeper introduction.
Installing C++
The step-by-step instructions vary by platform; for a basic Debian system, here are the steps:
sudo apt update
sudo apt install build-essential
sudo apt-get install cmake
sudo apt-get install clang-format
g++ --version
xbill@penguin:~$ g++ --version
g++ (Debian 12.2.0-14+deb12u1) 12.2.0
Copyright (C) 2022 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
xbill@penguin:~$
Gemini CLI
If it is not pre-installed, you can install the Gemini CLI to interact with the source files and provide real-time assistance:
npm install -g @google/gemini-cli
Testing the Gemini CLI Environment
Once you have all the tools and the correct Node.js version in place, you can test the startup of Gemini CLI. You will need to authenticate with an API key or your Google Account:
gemini
Node Version Management
Gemini CLI needs a consistent, up-to-date version of Node.js. The nvm command can be used to set up a standard Node environment.
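As a rough sketch, a typical nvm flow looks like this (the major version number here is an assumption; use whatever version Gemini CLI currently requires):

```shell
# Install and activate a recent LTS Node with nvm, then confirm the versions.
nvm install 20
nvm use 20
node --version
npm --version
```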
C++ MCP Documentation
The MCP C++ page provides samples and documentation for getting started:
cpp-mcp/examples/server_example.cpp at main · hkr04/cpp-mcp
Where do I start?
The strategy for starting MCP development is an incremental, step-by-step approach.
First, the basic development environment is set up with the required system variables and a working Gemini CLI configuration.
Then, a minimal Hello World-style C++ MCP server is built with HTTP transport. This server is validated with Gemini CLI in the local environment.
This setup validates the connection from Gemini CLI to the local process via MCP. The MCP client (Gemini CLI) and the MCP server both run in the same local environment.
Next, the basic MCP server is deployed to Google Cloud Run. This cloud deployment is validated with a local copy of Gemini CLI running as an MCP client.
Set Up the Basic Environment
At this point you should have a working C++ environment and a working Gemini CLI installation. The next step is to clone the GitHub samples repository with support scripts:
cd ~
git clone https://github.com/xbill9/gemini-cli-codeassist
Then run init.sh from the cloned directory.
The script will attempt to determine your shell environment and set the correct variables:
cd gemini-cli-codeassist
source init.sh
If your session times out or you need to re-authenticate, you can run the set_env.sh script to reset your environment variables:
cd gemini-cli-codeassist
source set_env.sh
Variables like PROJECT_ID need to be set up for use in the various build scripts, so the set_env.sh script can be used to reset the environment if you time out.
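For illustration only, a script like set_env.sh typically derives these variables from the active gcloud configuration. This is a sketch of the idea, not the script's actual contents (the variable names beyond PROJECT_ID and the default region are assumptions; check the script in the repo):

```shell
# Hypothetical sketch: re-derive build variables after a session timeout.
# PROJECT_ID comes from the active gcloud config; REGION is a placeholder default.
export PROJECT_ID="${PROJECT_ID:-$(gcloud config get-value project 2>/dev/null || true)}"
export REGION="${REGION:-us-central1}"
echo "PROJECT_ID=${PROJECT_ID} REGION=${REGION}"
```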
Hello World with HTTP Transport
One of the key features that the standard MCP libraries provide is abstraction over the various transport methods.
The high-level MCP tool implementation is the same no matter what low-level transport the MCP client uses to connect to an MCP server.
The simplest transport that the SDK supports is the stdio (stdin/stdout) transport, which connects to a locally running process. Both the MCP client and MCP server must be running in the same environment.
The HTTP transport allows the client and server to run locally or be distributed over the Internet.
The connection over HTTP will look similar to this:
// Create and configure server
mcp::server::configuration srv_conf;
srv_conf.host = "0.0.0.0";
srv_conf.port = port;
mcp::server server(srv_conf);
server.set_server_info("mcp-https-cplus", "1.0.0");
C++ Package Information
The code depends on several standard C++ library headers, plus the MCP server and tool headers:
#include <chrono>
#include <ctime>
#include <filesystem>
#include <functional>
#include <iomanip>
#include <iostream>
#include <map>
#include <mutex>
#include <sstream>
#include <string>
#include <string_view>
#include <vector>
#include "mcp_server.h"
#include "mcp_tool.h"
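The only tool the demo server exposes is greet. Stripped of the MCP plumbing, its handler logic is a simple echo, as the CLI transcripts later show. Here it is as a standalone sketch (the function name and signature are illustrative, not the cpp-mcp SDK's actual API):

```cpp
#include <string>

// Illustrative sketch only: captures the core behavior of the demo's "greet"
// tool, which returns the caller's text argument back unchanged.
std::string greet(const std::string& param) {
    return param;  // echo the input, as seen in the Gemini CLI session
}
```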
Installing and Running the C++ Code
Run the default make target on the local system:
xbill@penguin:~/gemini-cli-codeassist/mcp-https-cplus$ make
Pulling main repository changes...
git pull origin main
From https://github.com/xbill9/gemini-cli-codeassist
* branch main -> FETCH_HEAD
Already up to date.
Updating all submodules to the latest remote versions...
git submodule update --init --recursive --remote
c++ -std=c++17 -Wall -Wextra -Icpp-mcp/include -Icpp-mcp/common -O2 -c main.cpp
mkdir -p cpp-mcp/build
cd cpp-mcp/build && cmake .. && make mcp
-- The CXX compiler identification is GNU 12.2.0
-- Detecting CXX compiler ABI info
To lint the code:
xbill@penguin:~/gemini-cli-codeassist/mcp-https-cplus$ make lint
clang-format --dry-run --Werror main.cpp
To test the code:
xbill@penguin:~/gemini-cli-codeassist/mcp-https-cplus$ make test
python3 test_server.py
Testing 'greet' tool over HTTP/SSE...
✓ Connected to SSE, message endpoint: /message?session_id=a348a2a7-aa79-0d3c-e363-cf9609d2dc59
✓ initialize successful
✓ notifications/initialized sent
✓ tools/list successful
✓ tools/call (greet) successful
All tests passed!
Gemini CLI settings.json
In this example, the C++ source code is compiled into a binary that runs as a local HTTP server, which Gemini CLI connects to directly.
The default Gemini CLI settings.json has an entry for this server:
{
  "mcpServers": {
    "mcp-https-cplus": {
      "url": "http://127.0.0.1:8080/sse"
    }
  }
}
Start the Local MCP Server
Open a terminal window and kick off the local MCP server:
xbill@penguin:~/gemini-cli-codeassist/mcp-https-cplus$ make run
./server
{"timestamp":"2026-01-17T17:03:57Z","level":"INFO","message":"Starting MCP HTTP server","data":{"host":"0.0.0.0","port":8080}}
2026-01-17 12:03:57 [INFO] Starting MCP server on 0.0.0.0:8080
2026-01-17 12:03:57 [INFO] Starting server in blocking mode
2026-01-17 12:04:32 [INFO] 127.0.0.1:53864 - "GET /sse HTTP/1.1" -1
2026-01-17 12:04:33 [INFO] 127.0.0.1:53870 - "POST /message HTTP/1.1" 202
2026-01-17 12:04:33 [INFO] Processing method call: initialize
2026-01-17 12:04:33 [INFO] Client requested protocol version: 2025-11-25
2026-01-17 12:04:33 [INFO] Client connected: gemini-cli-mcp-client 0.0.1
2026-01-17 12:04:33 [INFO] Initialization successful, waiting for notifications/initialized notification
2026-01-17 12:04:33 [INFO] 127.0.0.1:53870 - "POST /message HTTP/1.1" 202
2026-01-17 12:04:33 [INFO] 127.0.0.1:53878 - "POST /message HTTP/1.1" 202
2026-01-17 12:04:33 [INFO] Processing method call: tools/list
2026-01-17 12:04:33 [INFO] Calling method handler: tools/list
2026-01-17 12:04:33 [INFO] Method call successful: tools/list
Validation with Gemini CLI
Next, open another window and start Gemini CLI. The local MCP connection over HTTP to the C++ code is validated and the full Gemini CLI session will start:
> call mcp greet buffer overflow!
✦ I will call the greet tool with the parameter "buffer overflow!".
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ ? greet (mcp-https-cplus MCP Server) {"param":"buffer overflow!"} ← │
│ │
│ MCP Server: mcp-https-cplus │
│ Tool: greet │
│ │
│ Allow execution of MCP tool "greet" from server "mcp-https-cplus"? │
│ │
│ 1. Allow once │
│ 2. Allow tool for this session │
│ ● 3. Allow all server tools for this session │
│ 4. No, suggest changes (esc) │
│ │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ ✓ greet (mcp-https-cplus MCP Server) {"param":"buffer overflow!"} │
│ │
│ buffer overflow! │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
Deploying to Cloud Run
After the HTTP version of the MCP server has been tested locally, it can be deployed remotely to Google Cloud Run.
First, switch to the directory with the HTTP MCP sample code:
cd ~/gemini-cli-codeassist/mcp-https-cplus
Deploy the project to Google Cloud Run with the pre-built cloudbuild.yaml and Dockerfile:
cd ~/gemini-cli-codeassist/mcp-https-cplus
xbill@penguin:~/gemini-cli-codeassist/mcp-https-cplus$ make deploy
The Cloud Build will start:
xbill@penguin:~/gemini-cli-codeassist/mcp-https-cplus$ make deploy
Submitting build to Google Cloud Build...
Creating temporary archive of 290 file(s) totalling 5.7 MiB before compression.
Uploading tarball of [.] to [gs://comglitn_cloudbuild/source/1768670570.011261-f8f1037fb7394964800a064b0d5afb03.tgz]
Created [https://cloudbuild.googleapis.com/v1/projects/comglitn/locations/global/builds/62a3353a-0536-40ab-8ba2-c212f968cec0].
Logs are available at [https://console.cloud.google.com/cloud-build/builds/62a3353a-0536-40ab-8ba2-c212f968cec0?project=1056842563084].
Waiting for build to complete. Polling interval: 1 second(s).
----------------------------------------------- REMOTE BUILD OUTPUT ------------------------------------------------
starting build "62a3353a-0536-40ab-8ba2-c212f968cec0"
The build can take 15–30 minutes to complete, since Cloud Build needs to pull in all the C++ libraries in the build environment and generate the entire package from scratch.
When the build is complete, an endpoint will be returned:
Starting Step #1
Step #1: Already have image (with digest): gcr.io/cloud-builders/gcloud
Step #1: Deploying container to Cloud Run service [mcp-https-cplus] in project [comglitn] region [us-central1]
Step #1: Deploying...
Step #1: Setting IAM Policy..............done
Step #1: Creating Revision....................................................done
Step #1: Routing traffic.....done
Step #1: Done.
Step #1: Service [mcp-https-cplus] revision [mcp-https-cplus-00007-h4p] has been deployed and is serving 100 percent of traffic.
Step #1: Service URL: https://mcp-https-cplus-1056842563084.us-central1.run.app
Finished Step #1
The service endpoint in this example is:
https://mcp-https-cplus-1056842563084.us-central1.run.app
The actual endpoint will vary based on your project settings.
Review Service in Cloud Run
Navigate to the Google Cloud console and search for Cloud Run. From there you can see detailed information on the Cloud Run service.
Cloud Logging
The remote server writes logs to stderr in standard JSON format. These logs are available from the deployed Cloud Run service.
Validate HTTP connection
Once you have the endpoint, you can attempt a connection by navigating to it in your browser:
https://mcp-https-cplus-1056842563084.us-central1.run.app
You will need to adjust the exact URL to match the URL returned from Cloud Build.
You will get an error, because this endpoint expects a message in the MCP format:
Page Not Found
Gemini CLI settings.json.cloudrun
Replace the default Gemini CLI configuration file, settings.json, with the pre-configured sample settings.json.cloudrun to use the Cloud Run version of the connection:
{
  "mcpServers": {
    "hello-cloudrun-cplus": {
      "url": "https://mcp-https-cplus-$PROJECT_NUMBER.us-central1.run.app/sse"
    }
  }
}
Copy the Cloud Run version of the Gemini CLI configuration file:
xbill@penguin:~/gemini-cli-codeassist/mcp-https-cplus$ cd .gemini
cp settings.json.cloudrun settings.json
xbill@penguin:~/gemini-cli-codeassist/mcp-https-cplus/.gemini$
Validation with Gemini CLI
The final connection test uses Gemini CLI as an MCP client, with the deployed Cloud Run service in C++ providing the MCP server. Start Gemini CLI with the updated settings:
gemini
/mcp list
hello-cloudrun-cplus - Ready (1 tool)
Tools:
- greet
> greet Cloud Run NULL C++ pointer!
✦ I will call the greet tool with the requested parameter.
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ ✓ greet (hello-cloudrun-cplus MCP Server) {"param":"Cloud Run NULL C++ pointer!"} │
│ │
│ Cloud Run NULL C++ pointer! │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
✦ The greet tool returned: "Cloud Run NULL C++ pointer!"
Summary
MCP development with C++ using Gemini CLI was validated with an incremental, step-by-step approach.
A minimal streaming HTTP transport MCP server was built from source code and validated with Gemini CLI running as an MCP client in the same local environment.
Then the MCP server was wrapped in a container and submitted to Google Cloud Build for deployment to Google Cloud Run. The remote deployment was validated with a standard browser and with Gemini CLI.
Finally, remote MCP operations were performed from the local Gemini CLI installation against the C++ MCP server hosted in Google Cloud Run.
This approach can be extended to more complex deployments and cloud-based options.