This article leverages the Google Agent Development Kit (ADK) and the underlying Gemini LLM to build multi-agent applications with A2A protocol support, using the Python programming language.
Aren’t There a Billion Python ADK Demos?
Yes there are.
Python has traditionally been the main coding language for ML and AI tools. The goal of this article is to provide a multi-agent test bed for building, debugging, and deploying multi-agent applications.
What you talkin ‘bout Willis?
So what is different about this lab compared to all the others out there?
This is one of the first deep dives into a multi-agent application leveraging the advanced tooling of Gemini CLI. The starting point for the demo was an existing Codelab, which was updated and re-engineered with Gemini CLI.
The original Codelab is here:
Building a Multi-Agent System | Google Codelabs
What Is Python?
Python is an interpreted language that allows for rapid development and testing, and it has deep libraries for working with ML and AI.
Python Version Management
One of the downsides of Python's wide deployment has been managing language versions across platforms and maintaining a supported version.
The pyenv tool enables deploying consistent versions of Python:
GitHub - pyenv/pyenv: Simple Python version management
As of this writing, the mainstream Python version is 3.13. To validate your current Python version:
python --version
Python 3.13.13
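A typical pyenv workflow looks like the following (a sketch that assumes pyenv is already installed and on your PATH):

```shell
# Install a Python 3.13 release and make it the default for this user
pyenv install 3.13   # recent pyenv resolves this prefix to the latest 3.13.x
pyenv global 3.13    # set the default interpreter version
python --version     # should now report Python 3.13.x
```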
Microsoft Fabric
Microsoft Fabric is an all-in-one, SaaS-based data analytics platform that unifies tools like Data Factory, Synapse, and Power BI into a single workspace, powered by AI and the OneLake data lake. It streamlines data engineering, warehousing, and real-time analytics without the need to manage complex infrastructure, and is designed for both professional and citizen developers.
Key Features and Capabilities:
- OneLake: A single, unified, SaaS-based data lake (“OneDrive for data”) that stores all organization data, reducing silos.
- Unified Components: Integrates Data Factory (ingestion), Synapse (engineering/warehousing), and Power BI (reporting) into one experience.
- AI Integration: Includes Copilot for creating data flows, writing SQL, and generating reports.
- SaaS Experience: Simplifies setup by eliminating the need for infrastructure management like resource groups, unlike traditional Azure services.
- Open Data Formats: Uses Delta Lake/Parquet formats by default, allowing data to be shared across engines without duplication.
More details are available here:
What is Microsoft Fabric? | Microsoft Fabric
Why would I want Gemini CLI with Azure? Isn’t that a Google Thing?
Yes, Gemini CLI leverages the Google Cloud console and Gemini models, but it is also open source and platform agnostic. Many applications are already cross-cloud, so this lets familiar tools run natively on Microsoft Azure.
Azure Fabric Configuration
To configure your Azure service with the base system tools, this article provides a reference:
MCP Development with Python, and Azure Fabric
Gemini CLI
If it is not pre-installed, you can install the Gemini CLI to interact with the source files and provide real-time assistance:
npm install -g @google/gemini-cli
Testing the Gemini CLI Environment
Once you have all the tools and the correct Node.js version in place, you can test the startup of Gemini CLI. You will need to authenticate with an API key or your Google Account:
▝▜▄ Gemini CLI v0.33.1
Logged in with Google (/auth)
Gemini Code Assist Standard (/upgrade) | no sandbox (see /docs) | model: Auto (Gemini 3) | 239.8 MB
Node Version Management
Gemini CLI needs a consistent, up-to-date version of Node.js. The nvm tool can be used to set up a standard Node environment:
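For example, to install and activate the current LTS release with nvm (assuming nvm is already installed in your shell):

```shell
nvm install --lts   # download and install the latest Long Term Support release
nvm use --lts       # activate it in the current shell session
node --version      # verify the active Node.js version
```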
Agent Development Kit
The Google Agent Development Kit (ADK) is an open-source, Python-based framework designed to streamline the creation, deployment, and orchestration of sophisticated, multi-agent AI systems. It treats agent development like software engineering, offering modularity, state management, and built-in tools (like Google Search) to build autonomous agents.
The ADK can be installed from here:
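The ADK is published on PyPI as google-adk, so a typical install into your active Python environment looks like this (a sketch; consider pinning a version for production use):

```shell
pip install google-adk   # installs the ADK library and the adk CLI
adk --help               # verify the CLI is available on your PATH
```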
Agent Skills
Gemini CLI can be customized to work with ADK agents. Both an Agent Development MCP server and specific Agent Skills are available.
More details are here:
To get the Agent Skills in Gemini CLI:
> /skills list
Available Agent Skills:
- adk-cheatsheet
MUST READ before writing or modifying ADK agent code. ADK API quick reference for Python — agent types, tool definitions, orchestration
patterns, callbacks, and state management. Includes an index of all ADK documentation pages. Do NOT use for creating new projects (use
adk-scaffold).
- adk-deploy-guide
MUST READ before deploying any ADK agent. ADK deployment guide — Agent Engine, Cloud Run, GKE, CI/CD pipelines, secrets, observability, and
production workflows. Use when deploying agents to Google Cloud or troubleshooting deployments. Do NOT use for API code patterns (use
adk-cheatsheet), evaluation (use adk-eval-guide), or project scaffolding (use adk-scaffold).
- adk-dev-guide
ALWAYS ACTIVE — read at the start of any ADK agent development session. ADK development lifecycle and mandatory coding guidelines —
spec-driven workflow, code preservation rules, model selection, and troubleshooting.
- adk-eval-guide
MUST READ before running any ADK evaluation. ADK evaluation methodology — eval metrics, evalset schema, LLM-as-judge, tool trajectory
scoring, and common failure causes. Use when evaluating agent quality, running adk eval, or debugging eval results. Do NOT use for API code
patterns (use adk-cheatsheet), deployment (use adk-deploy-guide), or project scaffolding (use adk-scaffold).
- adk-observability-guide
MUST READ before setting up observability for ADK agents or when analyzing production traffic, debugging agent behavior, or improving agent
performance. ADK observability guide — Cloud Trace, prompt-response logging, BigQuery Agent Analytics, third-party integrations, and
troubleshooting. Use when configuring monitoring, tracing, or logging for agents, or when understanding how a deployed agent handles real
traffic.
- adk-scaffold
MUST READ before creating or enhancing any ADK agent project. Use when the user wants to build a new agent (e.g. "build me a search agent")
or enhance an existing project (e.g. "add CI/CD to my project", "add RAG").
and the ADK documentation:
> /mcp list
Configured MCP servers:
🟢 adk-docs-mcp (from adk-docs-ext) - Ready (2 tools)
Tools:
- mcp_adk-docs-mcp_fetch_docs
- mcp_adk-docs-mcp_list_doc_sources
Where do I start?
The strategy for starting multi-agent development is an incremental, step-by-step approach.
First, the basic development environment is set up with the required system variables and a working Gemini CLI configuration.
Then, the ADK multi-agent application is built, debugged, and tested locally. Finally, the entire solution is deployed to Azure Container Apps (ACA).
Setup the Basic Environment
At this point you should have a working Python environment and a working Gemini CLI installation. All of the relevant code examples and documentation are available on GitHub.
The next step is to clone the GitHub repository to your local environment:
cd ~
git clone https://github.com/xbill9/gemini-cli-azure
cd gemini-cli-azure/multi-fabric
Then run init2.sh from the cloned directory.
The script will attempt to determine your shell environment and set the correct variables:
source init2.sh
If your session times out or you need to re-authenticate, you can run the set_env.sh script to reset your environment variables:
source set_env.sh
Variables like PROJECT_ID need to be set up for use in the various build scripts, so the set_env script can be used to reset the environment after a time-out.
Finally install the packages and dependencies:
make install
Verify The ADK Installation
To verify the setup, run the ADK CLI locally with the researcher agent:
xbill@penguin:~/gemini-cli-azure/multi-fabric/agents$ adk run researcher
/home/xbill/.local/lib/python3.13/site-packages/google/adk/features/_feature_decorator.py:72: UserWarning: [EXPERIMENTAL] feature FeatureName.PLUGGABLE_AUTH is enabled.
check_feature_enabled()
Log setup complete: /tmp/agents_log/agent.20260418_155613.log
To access latest log: tail -F /tmp/agents_log/agent.latest.log
{"asctime": "2026-04-18 15:56:13,235", "name": "root", "levelname": "INFO", "message": "Logging initialized for researcher", "filename": "logging_config.py", "lineno": 54, "service": "researcher", "log_level": "INFO"}
{"asctime": "2026-04-18 15:56:13,236", "name": "researcher.agent", "levelname": "INFO", "message": "Initialized researcher agent with model: gemini-2.5-flash", "filename": "agent.py", "lineno": 85}
Test The ADK Web Interface
This tests the ADK agent interactions with a browser:
xbill@penguin:~/gemini-cli-azure/multi-fabric/agents$ adk web --host 0.0.0.0
/home/xbill/.pyenv/versions/3.13.13/lib/python3.13/site-packages/google/adk/features/_feature_decorator.py:72: UserWarning: [EXPERIMENTAL] feature FeatureName.PLUGGABLE_AUTH is enabled.
check_feature_enabled()
2026-04-10 17:49:11,850 - INFO - service_factory.py:266 - Using in-memory memory service
2026-04-10 17:49:11,850 - INFO - local_storage.py:84 - Using per-agent session storage rooted at /home/xbill/multi-agent/agents
2026-04-10 17:49:11,850 - INFO - local_storage.py:110 - Using file artifact service at /home/xbill/multi-agent/agents/.adk/artifacts
/home/xbill/.pyenv/versions/3.13.13/lib/python3.13/site-packages/google/adk/cli/fast_api.py:198: UserWarning: [EXPERIMENTAL] InMemoryCredentialService: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.
credential_service = InMemoryCredentialService()
/home/xbill/.pyenv/versions/3.13.13/lib/python3.13/site-packages/google/adk/auth/credential_service/in_memory_credential_service.py:33: UserWarning: [EXPERIMENTAL] BaseCredentialService: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.
super().__init__()
INFO: Started server process [16063]
INFO: Waiting for application startup.
+-----------------------------------------------------------------------------+
| ADK Web Server started |
| |
| For local testing, access at http://0.0.0.0:8000. |
+-----------------------------------------------------------------------------+
Then use the web interface, either on the local interface (127.0.0.1) or the catch-all interface (0.0.0.0), depending on your environment:
Special note for Google Cloud Shell deployments: add a CORS allow_origins exemption to allow the ADK agent to run:
adk web --host 0.0.0.0 --allow_origins 'regex:.*'
Multi Agent Design
The multi-agent deployment consists of five agents:
- Researcher
- Judge
- Orchestrator
- Content Builder
- Course Builder
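To make the division of labor concrete, here is a plain-Python sketch of the pipeline's control flow (stdlib only; this is illustrative and not the actual ADK API, and the scoring logic and threshold are invented for the example):

```python
# Illustrative sketch of the research-and-refine pipeline; state flows
# through a shared dict, similar in spirit to ADK session state.

def researcher(state: dict) -> dict:
    # In the real system this agent calls Gemini with search tools.
    state["findings"] = f"notes on {state['topic']} (round {state['round']})"
    return state

def judge(state: dict) -> dict:
    # The judge scores the findings; here a fake score improves each round.
    state["score"] = 0.5 + 0.3 * state["round"]
    return state

def content_builder(state: dict) -> dict:
    # Turns the approved findings into a five-module course outline.
    state["course"] = [f"Module {i}: {state['findings']}" for i in range(1, 6)]
    return state

def orchestrate(topic: str, threshold: float = 0.8, max_rounds: int = 3) -> dict:
    state = {"topic": topic, "round": 0}
    # Research/judge loop: refine until the judge approves or we hit the cap.
    while state["round"] < max_rounds:
        state["round"] += 1
        state = judge(researcher(state))
        if state["score"] >= threshold:
            break
    return content_builder(state)

result = orchestrate("history of the internet")
print(len(result["course"]), "modules after", result["round"], "round(s)")
```

In the real deployment the same roles are played by ADK agents wired together with SequentialAgent and LoopAgent, with state carried in the session rather than a dict.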
An overview of the multi-agent system can be found here:
Multi-Agent A2A with the Agent Development Kit(ADK), Cloud Run, Agent Skills, and Gemini CLI
Running/Testing/Debugging Locally
The main Makefile has been extended with extensive targets for managing the agents on the local development environment.
The key targets include:
✦ Here is the output of make help with the warnings resolved:
Available commands:
  install            - Install all dependencies for root, agents, and app
  install-fabric-cli - Install Microsoft Fabric CLI and Azure extension
  start              - Start all services locally (alias for start-local)
  stop               - Stop all local services (alias for stop-local)
  run                - Start all services locally (alias for start-local)
  local              - Show local service URLs
  status             - Show status of local services and Fabric deployment
  start-local        - Start all local services in background
  stop-local         - Stop all local processes
  test               - Run all tests (pytest)
  test-researcher    - Test the Researcher agent directly
  test-judge         - Test the Judge agent directly
  test-orchestrator  - Test the Orchestrator logic
  lint               - Run linting checks (ruff)
  deploy             - Deploy stacked agents to Azure Container Apps & Fabric
  destroy            - Delete Azure Resource Group and all resources
  endpoint           - Show public service endpoint
  clean              - Remove caches and logs
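As a sketch of how targets like start-local and stop-local are typically wired in such a Makefile (illustrative, not the project's actual Makefile):

```make
start-local: stop-local
	# Launch the backend in the background and remember its PID
	python3 -m unified_app.app > backend.log 2>&1 & echo $$! > backend.pid
	@echo "Backend: http://localhost:8080"

stop-local:
	# Kill a previously started backend, ignoring errors if none is running
	-@[ -f backend.pid ] && kill $$(cat backend.pid) 2>/dev/null; rm -f backend.pid
```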
First, check for locally running agents:
xbill@penguin:~/gemini-cli-azure/multi-fabric$ make status
make[1]: Entering directory '/home/xbill/gemini-cli-azure/multi-fabric'
Checking status of locally running servers...
--- Network Status ---
No services listening on expected ports (8080, 5173).
--- Process Status ---
Then all the agents can be started together:
xbill@penguin:~/gemini-cli-azure/multi-fabric$ make start
Stopping any existing server processes...
Starting Unified App Backend in background...
Starting Frontend dev server in background...
All services started. Logs: backend.log, frontend.log
Frontend: http://localhost:5173
Backend: http://localhost:8080
xbill@penguin:~/gemini-cli-azure/multi-fabric$ make status
make[1]: Entering directory '/home/xbill/gemini-cli-azure/multi-fabric'
Checking status of locally running servers...
--- Network Status ---
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 19549/python3
--- Process Status ---
xbill 19549 10.1 2.2 251904 147036 pts/1 S 22:34 0:05 /home/xbill/.pyenv/versions/3.13.13/bin/python3 -m unified_app.app
The entire project can be linted and tested as a unit:
✦ I have completed the linting and formatting task for the project.
Summary of Actions:
- Initial Check: Ran ruff check . and identified 10 errors, including unused imports, unsorted imports, formatting
issues, and multiple statements on one line.
- Automated Fix: Executed ruff check . --fix, which automatically resolved 7 of the 10 errors.
- Manual Fix: Manually corrected the remaining 3 E701 errors in unified_app/app.py by splitting multiple statements
onto separate lines.
- Verification: Verified all checks pass with a final ruff check . execution.
All Python files are now compliant with the project's linting standards.
> make test
✦ I have executed make test for the project.
Summary of Results:
- Total Tests Collected: 29
- Test Modules Run:
- agents/content_builder/tests/test_agent.py (2 passed)
- agents/judge/tests/test_agent.py (3 passed)
- agents/orchestrator/test_orchestrator.py (5 passed)
- agents/orchestrator/test_state_capturer.py (4 passed)
- agents/researcher/tests/test_agent.py (2 passed)
- app/tests/test_a2a_utils.py (2 passed)
- app/tests/test_api.py (3 passed)
- app/tests/test_auth.py (3 passed)
- app/tests/test_logging_config.py (2 passed)
- app/tests/test_utils.py (3 passed)
Outcome: All 29 tests passed successfully. There were a couple of minor warnings related to deprecated modules inside
third-party dependencies (authlib and google.adk), which are expected and do not affect the tests.
And tested end to end:
✦ I have successfully completed the linting and testing suite for the project.
Results Overview:
1. Linting (make lint):
- Resolved 10 issues using ruff.
- Automatically fixed 7 errors (imports/formatting).
- Manually fixed 3 E701 errors in unified_app/app.py by splitting multiple statements into separate lines.
- Final check: Passed.
2. Unit & Integration Tests (make test):
- Executed 29 tests across all microservices and the main application.
- Result: All 29 tests passed.
3. End-to-End Tests (make e2e-test):
- Started the local unified backend and frontend servers.
- Successfully executed the E2E pipeline for the topic: "Create a short course about the history of the internet".
- The system correctly orchestrated the Researcher (gathering info), Judge (evaluating findings), and Content
Builder (writing Markdown modules).
- Result: Successfully generated a 5-module course.
The local services have been stopped to clean up the environment. The codebase is now lint-free and verified as
functionally correct.
And the entire agent system will run in the local environment:
Local Logging / Debugging
Gemini CLI has full access to the local agent logs for debugging and troubleshooting:
✦ The end-to-end test completed successfully!
The system ran the course creation pipeline ("Create a short course about the history of the internet"), executing the
researcher, judge, and content builder agents, and successfully generated the final output without errors.
Deploying to Azure Fabric
The project-level Makefile has targets for managing the agent deployment to serverless endpoints:
A utility script checks the deployment to Azure Fabric:
xbill@penguin:~/gemini-cli-azure/multi-fabric$ make status
make[1]: Entering directory '/home/xbill/gemini-cli-azure/multi-fabric'
Checking status of locally running servers...
--- Network Status ---
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 20808/python3
--- Process Status ---
xbill 20808 4.6 2.4 631760 160836 ? Sl 22:43 0:09 /home/xbill/.pyenv/versions/3.13.13/bin/python3 -m unified_app.app
make[1]: Leaving directory '/home/xbill/gemini-cli-azure/multi-fabric'
--- Azure Fabric / ACA Status ---
Checking Azure Container Apps status in adk-rg-fabric...
(ResourceGroupNotFound) Resource group 'adk-rg-fabric' could not be found.
make: *** [Makefile:124: status] Error 1
You can then deploy the services:
✓ Shell az account show -o table || echo "Not logged in to Azure"

EnvironmentName  HomeTenantId                          IsDefault  Name                  State    TenantDefaultDomain              TenantDisplayName  TenantId
---------------  ------------------------------------  ---------  --------------------  -------  -------------------------------  -----------------  ------------------------------------
AzureCloud       40482c55-d00d-4c6d-8903-643d76a74b9c  True       Azure subscription 1  Enabled  xbillglitnircom.onmicrosoft.com  Default Directory  40482c55-d00d-4c6d-8903-643d76a74b9c
✦ The deployment to Azure Container Apps (ACA) was successful!
Deployment Summary:
- Resource Group: adk-rg-fabric
- Location: westus2
- Container Registry: adkacrpenguin.azurecr.io
- Public URL: https://course-creator.icysky-8598b612.westus2.azurecontainerapps.io
(https://course-creator.icysky-8598b612.westus2.azurecontainerapps.io)
Verification:
I have verified that the remote service is healthy and the multi-agent pipeline is responding correctly. You can now
use the public URL to access your AI Course Creator.
Microsoft Fabric Integration:
A WorkloadManifest.xml has been generated in the fabric/ directory.
To complete the Fabric integration:
1. Register your backend App in Microsoft Entra ID.
2. Add the Fabric.Extend scope.
3. Upload fabric/WorkloadManifest.xml to the Fabric Admin Portal.
Your system is now fully operational in the cloud!
Once the containers are deployed, you can get the endpoint:
> make status
✦ Here is the current status of your environments:
Local Services
- Backend (unified_app): Running in the background (PID 20808) and listening on port 8080.
- Frontend: Not currently running locally (you can start it with make start-local or make frontend-local).
Azure Container Apps (Cloud Deployment)
- Status: Succeeded (Running smoothly)
- App FQDN: course-creator.icysky-8598b612.westus2.azurecontainerapps.io
(https://course-creator.icysky-8598b612.westus2.azurecontainerapps.io)
Both your local backend and your Azure cloud deployment are currently active and healthy.
And check the endpoint:
> make endpoint
✦ Here is your public Azure Container Apps endpoint:
https://course-creator.icysky-8598b612.westus2.azurecontainerapps.io
(https://course-creator.icysky-8598b612.westus2.azurecontainerapps.io)
You can use this URL to access your deployed AI Course Creator.
The service will be visible in the Azure console:
Test End to End in Azure
The entire agent system is tested on the remote Azure endpoint:
✦ The E2E test against the cloud-deployed Azure Container Apps endpoint was successful!
I have verified that your public cloud environment is fully functional and correctly orchestrating the multi-agent
course creation pipeline.
Running the Web Interface
Start a connection to the Azure Container Apps deployed app:
https://course-creator.icysky-8598b612.westus2.azurecontainerapps.io
Then connect to the app:
Then use the online course generator:
Final Gemini CLI Code Review
As a final step, Gemini CLI was used for a full code review of the project:
Overall Code Review Summary
After reviewing the app and agents directories, I can say this is an exceptionally well-engineered multi-agent system.
High-Level Architecture:
The architecture is sophisticated and effective. The use of a main orchestrator to manage a pipeline of specialized agents (researcher, judge,
content_builder) is a strong and scalable pattern. The inclusion of a research-and-refine loop with the judge agent is a standout feature that
significantly enhances the quality of the final output.
Key Strengths:
1. Expert ADK Usage: The project demonstrates a deep understanding of the Google ADK, using advanced features like SequentialAgent, LoopAgent,
RemoteA2aAgent, structured Pydantic outputs, and agent callbacks to their full potential.
2. Excellent Prompt Engineering: The instruction prompts for all agents are clear, specific, and well-crafted. This is the foundation of the
system's success.
3. Robust State Management: The custom StateCapturer agent is a brilliant, reusable utility that cleanly handles the flow of information between
agents.
4. Production-Ready Features: The system includes production-grade features like environment-aware authentication for service-to-service calls,
robust error handling, and detailed logging.
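As an illustration of that environment-aware authentication pattern, a hypothetical sketch (not the project's actual code; the metadata call is a placeholder) might select its credential strategy like this:

```python
import os

def fetch_platform_identity_token() -> str:
    # Placeholder: a real implementation would call the platform's
    # managed-identity / metadata endpoint to mint a short-lived token.
    return "example-identity-token"

def auth_headers() -> dict:
    """Pick service-to-service auth headers based on the runtime environment.

    Azure Container Apps sets CONTAINER_APP_NAME in deployed containers,
    so its presence is used here to distinguish cloud from local runs.
    """
    if os.environ.get("CONTAINER_APP_NAME"):
        return {"Authorization": f"Bearer {fetch_platform_identity_token()}"}
    return {}  # local development: no auth header

print(auth_headers())
```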
Summary
The Agent Development Kit (ADK) was used to build a multi-agent system with A2A support using the Gemini Flash LLM. The application was tested locally with Gemini CLI and then deployed to Azure Fabric. Several key takeaways and lessons learned were summarized from debugging and testing the multi-agent system, including deep log reviews. Finally, Gemini CLI was used for a complete project code review.