Leveraging the Google Agent Development Kit (ADK) and the underlying Gemini LLM to build cross-cloud Python applications deployed to the Azure Fabric service.
Aren’t There a Billion Python MCP Demos?
Yes, there are.
Python has long been the primary language for ML and AI tooling. The goal of this article is to provide a minimal, working MCP stdio server that can be run locally without any unneeded code or extensions.
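For context on what "minimal" means here: MCP's stdio transport exchanges newline-delimited JSON-RPC 2.0 messages over stdin/stdout. The sketch below shows only that framing, using the standard library; a real server would use the official `mcp` Python SDK, and the echo handler is purely illustrative.

```python
import json

# MCP's stdio transport exchanges newline-delimited JSON-RPC 2.0 messages
# over stdin/stdout. This is a framing sketch only -- a real server would
# use the official `mcp` Python SDK; the stub handler below is illustrative.

def encode_message(msg_id, method, params):
    """Serialize a JSON-RPC 2.0 request as one newline-terminated line."""
    return json.dumps({"jsonrpc": "2.0", "id": msg_id,
                       "method": method, "params": params}) + "\n"

def handle_line(line):
    """Parse one incoming line and return a stub JSON-RPC response dict."""
    req = json.loads(line)
    # A real server would dispatch on req["method"]; we just echo it back.
    return {"jsonrpc": "2.0", "id": req.get("id"),
            "result": {"echo": req.get("method")}}

# A server's main loop would be roughly:
#   for line in sys.stdin:
#       sys.stdout.write(json.dumps(handle_line(line)) + "\n")
#       sys.stdout.flush()
```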
What Is Python?
Python is an interpreted language that allows for rapid development and testing, and it has deep libraries for working with ML and AI.
Python Version Management
One of the downsides of the wide deployment of Python has been managing the language versions across platforms and maintaining a supported version.
The pyenv tool enables deploying consistent versions of Python:
GitHub - pyenv/pyenv: Simple Python version management
As of this writing, the mainstream Python version is 3.13. To validate your current Python version:
admin@ip-172-31-70-211:~/gemini-cli-azure$ python --version
Python 3.13.12
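The same check can be done programmatically, which is handy inside setup scripts. This is a minimal sketch; the `(3, 10)` floor is an assumption, so raise it to whatever your tooling actually requires.

```python
import sys

# Programmatic version check, handy inside setup scripts.
# The (3, 10) floor is an assumption; raise it to match your tooling.
MIN_VERSION = (3, 10)

def meets_minimum(minimum=MIN_VERSION):
    """Return True if the running interpreter is at least `minimum`."""
    return sys.version_info[:2] >= minimum

print(f"Python {sys.version.split()[0]} detected: "
      f"{'OK' if meets_minimum() else 'too old -- see pyenv above'}")
```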
Azure Fabric
Microsoft Fabric is a comprehensive, AI-powered cloud platform that unifies data engineering, data warehousing, data science, and business intelligence (using Power BI) into a single SaaS solution, built on the OneLake storage system. It simplifies data management by reducing reliance on disparate, siloed tools.
More details are here:
What is Microsoft Fabric - Microsoft Fabric
Why would I want Gemini CLI with Azure? Isn’t that a Google Thing?
Yes, Gemini CLI leverages the Google Cloud console and Gemini models, but it is also open source and platform agnostic. Many applications are already cross-cloud, so this enables familiar tools to be run natively on Microsoft Azure.
Gemini CLI
If it is not pre-installed, you can install the Gemini CLI to interact with the source files and provide real-time assistance:
npm install -g @google/gemini-cli
Testing the Gemini CLI Environment
Once you have all the tools and the correct Node.js version in place, you can test the startup of Gemini CLI. You will need to authenticate with an API key or your Google Account:
gemini
admin@ip-172-31-70-211:~/gemini-cli-azure$ gemini
▝▜▄ Gemini CLI v0.33.1
▝▜▄
▗▟▀ Logged in with Google /auth
▝▀ Gemini Code Assist Standard /upgrade
? for shortcuts
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
shift+tab to accept edits 3 GEMINI.md files | 1 MCP server
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
> Type your message or @path/to/file
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
~/.../gemini-cli-azure (main*) no sandbox (see /docs) /model Auto (Gemini 3) | 239.8 MB
Node Version Management
Gemini CLI needs a consistent, up-to-date version of Node.js. The nvm tool can be used to set up a standard Node environment.
Docker Version Management
The Azure CLI tools need a current version of Docker. If your environment does not provide a recent Docker install, the Docker Version Manager can be used to download the latest supported Docker.
Azure CLI
The Azure CLI provides a command line tool to directly access Azure services from your current environment. Full details on the CLI are available here:
Azure Command-Line Interface (CLI) documentation
Agent Development Kit
The Google Agent Development Kit (ADK) is an open-source, Python-based framework designed to streamline the creation, deployment, and orchestration of sophisticated, multi-agent AI systems. It treats agent development like software engineering, offering modularity, state management, and built-in tools (like Google Search) to build autonomous agents.
The ADK can be installed with pip:
pip install google-adk
This seems like a lot of Configuration!
Getting the key tools in place is the first step to working across cloud environments. For a deeper dive, a project with a similar setup can be found here:
Deploying ADK Agents on Azure Fabric
Where do I start?
The strategy for starting multimodal, real-time, cross-cloud agent development is an incremental, step-by-step approach.
The agents in the demo are based on the original code lab:
Way Back Home - Building an ADK Bi-Directional Streaming Agent | Google Codelabs
First, the basic development environment is set up with the required system variables and a working Gemini CLI configuration.
Then, a minimal ADK agent is built with the visual builder. Next, the entire solution is deployed to Azure.
Setup the Basic Environment
At this point you should have a working Python environment and a working Gemini CLI installation. All of the relevant code examples and documentation are available in GitHub. The repo has a wide variety of samples, but this lab will focus on the ‘level_3-fabric’ setup.
The next step is to clone the GitHub repository to your local environment:
cd ~
git clone https://github.com/xbill9/gemini-cli-azure
cd gemini-cli-azure/level_3-fabric
Then run init.sh from the cloned directory.
The script will attempt to determine your shell environment and set the correct variables:
source init.sh
If your session times out or you need to re-authenticate, you can run the set_env.sh script to reset your environment variables:
source set_env.sh
Variables like PROJECT_ID need to be set up for use in the various build scripts, so the set_env script can be used to reset the environment if your session times out.
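A quick sanity check before running the build scripts can save a confusing failure later. This is a small sketch: PROJECT_ID comes from the article, but the idea of a required-variable list is my own, so extend it with whatever your set_env.sh actually exports.

```python
import os

# Fail fast when the variables the build scripts rely on are missing.
# PROJECT_ID comes from the article; the list itself is an assumption --
# extend it with whatever your own set_env.sh actually exports.
REQUIRED_VARS = ["PROJECT_ID"]

def missing_vars(required=REQUIRED_VARS, env=os.environ):
    """Return the required variable names that are unset or empty."""
    return [name for name in required if not env.get(name)]

unset = missing_vars()
if unset:
    print(f"Missing: {', '.join(unset)} -- re-run: source set_env.sh")
else:
    print("Environment variables look good.")
```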
Verify The ADK Installation
To verify the setup, run the ADK CLI locally with Agent1:
xbill@penguin:~/gemini-cli-azure/level_3-fabric/backend/app$ adk run biometric_agent
Log setup complete: /tmp/agents_log/agent.20260403_091821.log
To access latest log: tail -F /tmp/agents_log/agent.latest.log
/home/xbill/.local/lib/python3.13/site-packages/google/adk/cli/cli.py:204: UserWarning: [EXPERIMENTAL] InMemoryCredentialService: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.
credential_service = InMemoryCredentialService()
/home/xbill/.local/lib/python3.13/site-packages/google/adk/auth/credential_service/in_memory_credential_service.py:33: UserWarning: [EXPERIMENTAL] BaseCredentialService: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.
super().__init__()
Running agent biometric_agent, type exit to exit.
Deploying to Azure Container Apps
The first step is to refresh the Azure credentials in the current build environment:
xbill@penguin:~/gemini-cli-azure/level_3-fabric$ az login
Run the deploy version on the local system:
xbill@penguin:~/gemini-cli-azure/level_3-fabric$ make deploy
./deploy-fabric.sh
Ensuring Resource Group exists... 0.0s 0.0s
You can validate the final result by checking the messages:
> make endpoint
✦ I will run the make endpoint command to retrieve the URL of your deployed Azure Container App.
╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ ✓ Shell make endpoint [current working directory /home/xbill/gemini-cli-azure/level_3-fabric] (Running 'make endpoint' to retrieve … │
│ │
│ biometric-scout-app.greenwater-07aa7f1f.canadaeast.azurecontainerapps.io │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
✦ The endpoint for your Biometric Scout app is:
biometric-scout-app.greenwater-07aa7f1f.canadaeast.azurecontainerapps.io
You can use this URL to access the frontend or integrate with other services in Microsoft Fabric.
And the status:
✦ Your Azure Container App is successfully deployed and running.
* Name: biometric-scout-app
* State: Succeeded
* URL: biometric-scout-app.greenwater-07aa7f1f.canadaeast.azurecontainerapps.io
The service will be visible in the Azure console.
Running the Web Interface
Start a connection to the deployed app:
biometric-scout-app.greenwater-07aa7f1f.canadaeast.azurecontainerapps.io
Then connect to the app:
Then use the Live model to process audio and video:
Finally, complete the sequence:
Fabric Integration Options
There are several options for deeper Fabric integrations:
✦ The biosync.sh script is just a local runner for the backend. To use this in Fabric (the Microsoft data platform), you would typically
follow these steps:
1. Integration via Fabric Notebooks
You can use the Gemini 2.0 Flash model deployed in your Container App to process data directly from Fabric Lakehouses or Warehouses.
* Action: In a Fabric Notebook, use ipython or requests to send audio/biometric data from your Lakehouse to the /api/verify endpoint of
your deployed app: https://biometric-scout-app.greenwater-07aa7f1f.canadaeast.azurecontainerapps.io.
2. Fabric Real-Time Intelligence (KQL)
If you are streaming biometric telemetry into a Fabric Eventhouse (KQL):
* Action: Set up a Data Activator in Fabric. When a specific biometric event occurs (e.g., "Unauthorized Access Attempt"), have Fabric
call a Webhook pointing to your Azure Container App to trigger the agent's voice-based verification logic.
3. Custom AI Skill
Microsoft Fabric allows you to create "AI Skills."
* Action: You can wrap your Container App's logic as a custom API and register it in the Fabric API Management or as a Custom Connector.
This allows users to "chat" with their biometric data using the logic you've built in agent.py.
4. Direct Frontend Embed
The simplest way to "see" it in Fabric is to add a Web Page dashboard item in a Fabric Workspace and paste your deployment URL:
https://biometric-scout-app.greenwater-07aa7f1f.canadaeast.azurecontainerapps.io
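The Fabric Notebook option above can be sketched with only the standard library. The /api/verify path echoes step 1, but the payload shape and field names here are assumptions; match them to your agent's actual API before using this.

```python
import json
import urllib.request

# Sketch of calling the deployed agent from a Fabric notebook using only
# the standard library. The /api/verify path echoes step 1 above, but the
# payload shape and field names are assumptions -- match your agent's API.
BASE_URL = ("https://biometric-scout-app.greenwater-07aa7f1f"
            ".canadaeast.azurecontainerapps.io")

def build_verify_request(base_url, payload):
    """Build a JSON POST request for the agent's verification endpoint."""
    return urllib.request.Request(
        url=f"{base_url}/api/verify",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_verify_request(BASE_URL, {"subject_id": "abc123", "sample": "..."})
# In the notebook you would then send it and read the agent's reply:
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp))
```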
Summary
The Agent Development Kit was used to build a multimodal agent using the Gemini Live model. The agent was tested locally with the CLI and then deployed to Azure Fabric services. This approach validates that cross-cloud tools can be used, even with more complex agents.