DEV Community

Claret Ibeawuchi
Understanding Google Maps Grounding with ADK (Part 2/5)

πŸ“š This is Part 2 of a 5-part series on building production-ready AI agents with Google's Agent Development Kit (ADK):

  • βœ… Part 1: Google Search Grounding with ADK (complete this first!)
  • πŸ“ Part 2: Understanding Google Maps Grounding with ADK (you are here)
  • Part 3: Building a Full-Stack Frontend with CopilotKit & AG-UI
  • Part 4: Persistent Sessions with PostgreSQL & Docker
  • Part 5: Production Deployment on Cloud Run

⚠️ Important: If you haven't completed Part 1, start there first! This tutorial builds directly on the concepts and setup from Part 1.


Introduction

Welcome back! In Part 1, we built an AI agent with Google Search grounding that can access real-time web information. Now in Part 2, we're adding location intelligence with Google Maps grounding.

Google Maps Grounding is a powerful feature in the Agent Development Kit (ADK) that enables AI agents to access information from Google Maps' vast database of places, businesses, and location data. By connecting your agents to Google Maps, you can provide users with accurate, location-aware responses grounded in real-world place information.

This feature is particularly valuable for queries requiring information about:

  • 🏒 Businesses, restaurants, and services
  • πŸ“ Addresses and locations
  • ⭐ Ratings, reviews, and hours of operation
  • πŸš— Directions and distance information

When your agent determines that location-based information is needed, it automatically searches Google Maps and incorporates the results into its response with proper attribution.

The Challenge: Limited Documentation

In Part 1, we had solid documentation for Search grounding. Part 2 is different. Unlike Vertex AI Search grounding, which has comprehensive documentation, Google Maps grounding in ADK is still evolving. When I built this integration, I had:

  1. Google Cloud's Vertex AI Maps Grounding docs - for the underlying API
  2. The ADK Python source code - the implementation
  3. No working ADK-specific examples or quickstart guides

I spent hours debugging, reading source code, and testing different approaches. This article documents what I learned so you can skip the struggle and use this as the Maps grounding documentation that doesn't yet exist officially.

What You'll Learn

Building on the foundation from Part 1, in this guide you'll discover:

  • Quick Setup: How to create and run a Google Maps-enabled agent from scratch
  • Grounding Architecture: The data flow and technical process behind Maps grounding
  • Response Structure: How to interpret grounded responses and their metadata
  • Current Limitations: Understanding what Maps grounding can and cannot do (yet)
  • Workarounds: The Agent-as-Tool pattern that makes everything work reliably
  • Best Practices: Guidelines for production implementations

Google Maps Grounding Quickstart

This quickstart walks you through creating an ADK agent with Google Maps grounding capability. It assumes a local IDE (VS Code, PyCharm, etc.) with Python 3.9+ and terminal access.

Prerequisites

Before starting Part 2, ensure you have:

βœ… Completed Part 1 - Search grounding agent working

βœ… Google Cloud Project - With billing enabled

βœ… Vertex AI API enabled - This is critical for Maps grounding

βœ… Google Cloud SDK (gcloud) - Authenticated locally

βœ… Python 3.9+ and your virtual environment activated

⚠️ Haven't completed Part 1? Start there first! This tutorial assumes you understand the Agent-as-Tool pattern and have ADK set up.

Important: Unlike Google Search (which can use API keys), Maps grounding REQUIRES Vertex AI. There's no fallback to the standard Gemini API.

Vertex AI vs Standard Gemini API

1. Set up Environment & Install ADK

If you completed Part 1, you already have ADK installed. If starting fresh:

Create & Activate Virtual Environment:

# Create
python -m venv .venv

# Activate (each new terminal)
# macOS/Linux: source .venv/bin/activate
# Windows CMD: .venv\Scripts\activate.bat
# Windows PowerShell: .venv\Scripts\Activate.ps1

Install ADK:

pip install google-adk
pip install python-dotenv  # For environment variables

2. Enable Vertex AI and Authenticate

Enable Vertex AI API:

# Set your project ID
export PROJECT_ID="your-project-id"
gcloud config set project $PROJECT_ID

# Enable Vertex AI API
gcloud services enable aiplatform.googleapis.com

# Verify it's enabled
gcloud services list --enabled | grep aiplatform

Expected output:

aiplatform.googleapis.com    Vertex AI API

Authenticate with Application Default Credentials:

Maps grounding uses Application Default Credentials (ADC), not API keys:

# First authenticate with gcloud
gcloud auth login

# Then set up application-default credentials (this is different!)
gcloud auth application-default login

Verify authentication:

# Check your credentials file exists
ls -la ~/.config/gcloud/application_default_credentials.json

Common Issue: If you get DefaultCredentialsError, ensure you've run both gcloud auth login and gcloud auth application-default login. They serve different purposes!
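As an extra sanity check from Python, you can look for the ADC file with the standard library. This is only a file-existence probe, not a credential validation; the path shown is the macOS/Linux default, while Windows stores it under %APPDATA%\gcloud:

```python
from pathlib import Path

def adc_path() -> Path:
    """Default ADC file location on macOS/Linux (Windows uses %APPDATA%\\gcloud)."""
    return Path.home() / ".config" / "gcloud" / "application_default_credentials.json"

adc = adc_path()
if adc.exists():
    print(f"ADC found at {adc}")
else:
    print("No ADC file -- run: gcloud auth application-default login")
```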

3. Create Agent Project

If building on Part 1, you already have the project structure. If starting fresh:

macOS/Linux:

# Step 1: Create a new directory for your agent
mkdir maps_grounding_agent

# Step 2: Create __init__.py for the agent
echo "from . import agent" > maps_grounding_agent/__init__.py

# Step 3: Create agent.py and .env files
touch maps_grounding_agent/agent.py .env

Windows:

# Step 1: Create a new directory for your agent
mkdir maps_grounding_agent

# Step 2: Create __init__.py for the agent
echo from . import agent > maps_grounding_agent\__init__.py

# Step 3: Create agent.py and .env files
type nul > maps_grounding_agent\agent.py
type nul > .env

4. Configure Environment Variables

Open the .env file and add your configuration:

# .env

# Google API Key (optional, for non-Vertex operations)
GOOGLE_API_KEY="your-google-api-key"

# Vertex AI Configuration (REQUIRED for Maps grounding)
GOOGLE_GENAI_USE_VERTEXAI="true"
GOOGLE_CLOUD_PROJECT="your-project-id"  # e.g., "my-ai-project-123456"
GOOGLE_CLOUD_LOCATION="us-central1"      # or your preferred region

Critical Notes:

  • GOOGLE_GENAI_USE_VERTEXAI="true" tells ADK to route grounding tools through Vertex AI
  • Replace your-project-id with your actual Google Cloud Project ID
  • Quoting is optional in .env files: python-dotenv strips matching surrounding quotes, so "true" and true both resolve to the string true. Just make sure the value is the lowercase string true

5. Edit agent.py

Copy and paste the following code into agent.py. This implements the Agent-as-Tool pattern, which is crucial for Maps grounding to work reliably.

maps_grounding_agent/agent.py:

"""
Google Maps Grounding Agent with ADK
Using the Agent-as-Tool pattern for reliable grounding
"""

from dotenv import load_dotenv
load_dotenv()

import os

# CRITICAL: Ensure Vertex AI environment variables are set in the Python process.
# This must happen BEFORE importing ADK modules. load_dotenv() has already
# copied .env into os.environ; setdefault() only fills in keys still missing.
os.environ.setdefault("GOOGLE_GENAI_USE_VERTEXAI", "false")
os.environ.setdefault("GOOGLE_CLOUD_PROJECT", "")
os.environ.setdefault("GOOGLE_CLOUD_LOCATION", "us-central1")

# NOW import ADK modules
from google.adk.agents import LlmAgent
from google.adk.tools import AgentTool
from google.adk.tools.google_maps_grounding_tool import GoogleMapsGroundingTool

# Helper function for Vertex AI model path
def _build_vertex_model_name(default_model: str = "gemini-2.5-flash") -> str:
    """Build fully qualified Vertex AI model name."""
    project_id = os.getenv("GOOGLE_CLOUD_PROJECT")
    location = os.getenv("GOOGLE_CLOUD_LOCATION", "us-central1")

    if project_id:
        return f"projects/{project_id}/locations/{location}/publishers/google/models/{default_model}"
    return default_model

# Create dedicated Maps grounding agent
# This agent ONLY uses GoogleMapsGroundingTool (Agent-as-Tool pattern)
maps_agent = LlmAgent(
    model=_build_vertex_model_name(),  # Use full Vertex AI path
    name='MapsAgent',
    instruction="""You are a location research specialist. Use Google Maps grounding to find 
    information about places, businesses, and locations. Always provide:
    - Full addresses
    - Ratings and review counts
    - Hours of operation when available
    - Phone numbers and websites when available
    Cite sources and be concise but comprehensive.""",
    tools=[GoogleMapsGroundingTool()],  # ONLY this tool
)

# Main agent that uses Maps grounding via AgentTool
root_agent = LlmAgent(
    name="maps_grounding_agent",
    model="gemini-2.5-flash",
    instruction="""Answer questions about places, businesses, and locations using the MapsAgent.

    When users ask about:
    - Finding restaurants, cafes, or businesses
    - Addresses and locations
    - Ratings, hours, or business details

    Always use the MapsAgent tool to get accurate, real-time information from Google Maps.
    Provide detailed, helpful responses with source attribution.""",
    description="Location-aware assistant with Google Maps grounding capabilities",
    tools=[AgentTool(agent=maps_agent)]  # Wrap Maps agent as a tool
)

Why This Pattern?

The Agent-as-Tool pattern is essential for Maps grounding. Here's why:

  1. Isolation: The Maps agent operates in its own context, preventing conflicts with other tools
  2. Vertex AI Routing: The full model path ensures requests go through Vertex AI, not the standard Gemini API
  3. Environment Variables: Setting os.environ before imports ensures ADK picks up Vertex AI configuration
  4. Tool Wrapping: Using AgentTool(agent=maps_agent) allows the main agent to delegate location queries to the specialized Maps agent
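The environment-variable point is easy to verify in isolation. Here is a minimal stdlib-only sketch (values are illustrative) showing why the setdefault() calls respect anything load_dotenv() already placed in the environment:

```python
import os

# Pretend load_dotenv() already put this in the process environment:
os.environ["GOOGLE_GENAI_USE_VERTEXAI"] = "true"

# setdefault() is a no-op for keys that are already set...
os.environ.setdefault("GOOGLE_GENAI_USE_VERTEXAI", "false")

# ...but fills in a default for keys that are still missing.
os.environ.pop("GOOGLE_CLOUD_LOCATION", None)
os.environ.setdefault("GOOGLE_CLOUD_LOCATION", "us-central1")

print(os.environ["GOOGLE_GENAI_USE_VERTEXAI"])  # true  (the .env value wins)
print(os.environ["GOOGLE_CLOUD_LOCATION"])      # us-central1
```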

Without this pattern, you'll encounter:

  • ValueError: google_maps parameter is not supported in Gemini API
  • Inconsistent environment variable loading
  • Silent failures where the tool is called but produces no results

You should now have the following directory structure:

my_project/
    maps_grounding_agent/
        __init__.py
        agent.py
    .env

6. Run Your Agent

There are multiple ways to interact with your agent:

Dev UI (adk web):

adk web

Note for Windows users: If you hit _make_subprocess_transport NotImplementedError, use adk web --no-reload instead.

Step 1: Open the URL provided (usually http://localhost:8000 or http://127.0.0.1:8000) in your browser.

Step 2: In the top-left corner, select "maps_grounding_agent" from the dropdown.

Step 3: Start asking location-based questions!

Terminal (adk run):

adk run maps_grounding_agent

To exit, use Cmd/Ctrl+C.

πŸ“ Example Prompts to Try

These prompts will confirm that the agent is actually calling Google Maps grounding:

  • "Find 3 highly rated restaurants in Times Square, New York"
  • "What is the address of the Empire State Building?"
  • "Find coffee shops near Central Park"
  • "What are the hours for Starbucks in downtown Seattle?"
  • "Tell me about the Louvre Museum in Paris"

You should see the agent return specific place information with addresses, ratings, hours, and other details from Google Maps.

Example Response:

Here are 3 highly rated restaurants in Times Square, New York:

1. **Carmine's Italian Restaurant**
   πŸ“ Address: 200 W 44th St, New York, NY 10036
   ⭐ Rating: 4.5/5 (8,234 reviews)
   πŸ•’ Hours: 11:30 AM - 11:00 PM
   Known for family-style Italian cuisine

2. **Ellen's Stardust Diner**
   πŸ“ Address: 1650 Broadway, New York, NY 10019
   ⭐ Rating: 4.3/5 (12,456 reviews)
   πŸ•’ Hours: 7:00 AM - 12:00 AM
   Famous for singing waitstaff

3. **Junior's Restaurant**
   πŸ“ Address: 1515 Broadway, New York, NY 10036
   ⭐ Rating: 4.4/5 (6,789 reviews)
   πŸ•’ Hours: 6:30 AM - 12:00 AM
   Iconic for cheesecake

πŸŽ‰ You've successfully created and interacted with your Google Maps grounding agent using ADK!


How Grounding with Google Maps Works

Grounding with Google Maps is the process that connects your agent to Google's database of 200+ million places worldwide, allowing it to generate accurate responses based on real-world location data. When a user's prompt requires location-based information, the agent's underlying LLM intelligently decides to invoke the GoogleMapsGroundingTool to find relevant place information.

Data Flow Diagram

This diagram illustrates the step-by-step process of how a user query results in a grounded response:


Detailed Description

The Maps grounding agent uses the data flow described in the diagram to retrieve, process, and incorporate location information into the final answer presented to the user.

  1. User Query: An end-user interacts with your agent by asking a location-based question (e.g., "Find restaurants near Times Square").

  2. ADK Orchestration: The Agent Development Kit orchestrates the agent's behavior and passes the user's message to your agent.

  3. LLM Analysis and Tool-Calling: The agent's LLM (e.g., a Gemini model) analyzes the prompt. If it determines that location information is required, it triggers the grounding mechanism by calling the Maps agent through AgentTool. This is ideal for answering queries about restaurants, businesses, addresses, or any location-based information.

  4. Google Maps Service Interaction: The GoogleMapsGroundingTool interacts with Google Maps' database through Vertex AI (not directly through the standard Gemini API). The service formulates and executes search queries against Google's place database.

  5. Place Retrieval & Ranking: Google Maps retrieves and ranks the most relevant places based on the query, location context, and relevance scoring. This includes addresses, ratings, reviews, hours, and other place attributes.

  6. Context Injection: The Maps service integrates the retrieved place information into the model's context before the final response is generated. This crucial step allows the model to "reason" over real-world location data.

  7. Grounded Response Generation: The LLM, now informed by relevant Google Maps data, generates a response that incorporates the retrieved place information with proper formatting and context.

  8. Response Presentation with Sources: The ADK receives the final grounded response, which includes place references and grounding metadata, and presents it to the user with attribution. This allows end-users to verify the information against Google Maps.
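The flow above can be sketched with plain functions standing in for the LLM and the Maps service. This is a toy illustration of the control flow only, with no real ADK or Vertex AI calls, and all data made up:

```python
def maps_service(query: str) -> list[dict]:
    # Stand-in for steps 4-5: the Google Maps lookup done via Vertex AI.
    return [{"title": "Carmine's Italian Restaurant",
             "address": "200 W 44th St, New York, NY 10036"}]

def maps_agent(query: str) -> str:
    # Stand-in for steps 6-7: inject retrieved places into context
    # and synthesize a grounded answer.
    places = maps_service(query)
    return "; ".join(f"{p['title']} ({p['address']})" for p in places)

def root_agent(user_query: str) -> str:
    # Stand-in for steps 2-3: the main agent decides whether the query
    # needs location data and delegates to the Maps agent if so.
    location_words = ("restaurant", "near", "address", "hours")
    if any(w in user_query.lower() for w in location_words):
        return maps_agent(user_query)
    return "Answered without Maps grounding."

print(root_agent("Find restaurants near Times Square"))
# Carmine's Italian Restaurant (200 W 44th St, New York, NY 10036)
```

In the real system the delegation decision is made by the LLM itself, not a keyword check; the sketch only shows where each step happens.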


Understanding Grounding with Google Maps Response

When the agent uses Google Maps grounding, it returns detailed information that includes the final text answer and metadata about the places used to generate that answer. This metadata is crucial for verifying the response and providing attribution.

Example of a Grounded Response

The following is an example of the content returned by the model after a grounded query against Google Maps.

Final Answer Text:

Here are 3 highly rated restaurants in Times Square, New York:

1. **Carmine's Italian Restaurant**
   Address: 200 W 44th St, New York, NY 10036
   Rating: 4.5/5 stars (8,234 reviews)
   Hours: 11:30 AM - 11:00 PM
   Known for family-style Italian cuisine

2. **Ellen's Stardust Diner**
   Address: 1650 Broadway, New York, NY 10019
   Rating: 4.3/5 stars (12,456 reviews)
   Hours: 7:00 AM - 12:00 AM
   Famous for singing waitstaff

3. **Junior's Restaurant**
   Address: 1515 Broadway, New York, NY 10036
   Rating: 4.4/5 stars (6,789 reviews)
   Hours: 6:30 AM - 12:00 AM
   Iconic for cheesecake and New York-style deli

Grounding Metadata Snippet:

This is the grounding metadata you will receive when inspecting events or using adk web (on the Response tab):

{
  "groundingMetadata": {
    "groundingChunks": [
      {
        "googleMaps": {
          "title": "Carmine's Italian Restaurant",
          "uri": "https://maps.google.com/maps?cid=1234567890",
          "placeId": "ChIJ...",
          "address": "200 W 44th St, New York, NY 10036"
        }
      },
      {
        "googleMaps": {
          "title": "Ellen's Stardust Diner",
          "uri": "https://maps.google.com/maps?cid=9876543210",
          "placeId": "ChIJ...",
          "address": "1650 Broadway, New York, NY 10019"
        }
      },
      {
        "googleMaps": {
          "title": "Junior's Restaurant",
          "uri": "https://maps.google.com/maps?cid=1122334455",
          "placeId": "ChIJ...",
          "address": "1515 Broadway, New York, NY 10036"
        }
      }
    ],
    "groundingSupports": [
      {
        "groundingChunkIndices": [0, 1, 2],
        "segment": {
          "endIndex": 450,
          "startIndex": 0,
          "text": "Here are 3 highly rated restaurants in Times Square, New York..."
        }
      }
    ],
    "retrievalQueries": [
      "highly rated restaurants Times Square New York",
      "best restaurants near Times Square"
    ]
  }
}

How to Interpret the Response

The metadata provides a link between the text generated by the model and the Google Maps places that support it. Here is a step-by-step breakdown:

  • groundingChunks: This is a list of the Google Maps places the model consulted. Each chunk contains:

    • googleMaps: Object with place details
    • title: Name of the place
    • uri: Google Maps URL for the place
    • placeId: Unique Google Maps Place ID
    • address: Full address of the place
  • groundingSupports: This list connects specific sentences in the final answer back to the groundingChunks.

    • segment: This object identifies a specific portion of the final text answer, defined by its startIndex, endIndex, and the text itself.
    • groundingChunkIndices: This array contains the index numbers that correspond to the sources listed in the groundingChunks. For example, [0, 1, 2] means this text segment is supported by all three places.
  • retrievalQueries: This array shows the specific search queries that were executed against Google Maps to find relevant places.
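To make the linkage concrete, here is a small stdlib-only sketch that walks groundingSupports and resolves each text segment to the titles of its supporting places, using an abbreviated copy of the metadata above:

```python
# Abbreviated copy of the groundingMetadata shown above.
metadata = {
    "groundingChunks": [
        {"googleMaps": {"title": "Carmine's Italian Restaurant"}},
        {"googleMaps": {"title": "Ellen's Stardust Diner"}},
        {"googleMaps": {"title": "Junior's Restaurant"}},
    ],
    "groundingSupports": [
        {"groundingChunkIndices": [0, 1, 2],
         "segment": {"startIndex": 0, "endIndex": 450,
                     "text": "Here are 3 highly rated restaurants..."}},
    ],
}

def segments_with_sources(md: dict) -> list[tuple[str, list[str]]]:
    """Pair each grounded text segment with its supporting place titles."""
    chunks = md.get("groundingChunks", [])
    pairs = []
    for support in md.get("groundingSupports", []):
        titles = [chunks[i]["googleMaps"]["title"]
                  for i in support["groundingChunkIndices"]]
        pairs.append((support["segment"]["text"], titles))
    return pairs

for text, titles in segments_with_sources(metadata):
    print(f"{text!r} <- {', '.join(titles)}")
```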


How to Display Grounding Responses with Google Maps

Unlike Google Search grounding, Maps grounding does not require specific display components. However, displaying place information with proper formatting enhances user experience and builds trust.

Optional Citation Display

Since grounding metadata is provided, you can choose to implement place displays based on your application needs:

Simple Text Display (Minimal Implementation):

for event in events:
    if event.is_final_response():
        print(event.content.parts[0].text)

        # Optional: Show source count
        if event.grounding_metadata:
            chunks = event.grounding_metadata.grounding_chunks
            print(f"\nBased on {len(chunks)} places from Google Maps")

Enhanced Place Display (Recommended):

You can implement detailed place displays that show addresses, ratings, and links:

def display_maps_response(event):
    """Display Maps grounding response with enhanced place information"""

    # Show the main response
    if event.content and event.content.parts:
        print(event.content.parts[0].text)

    # Show place details if available
    if hasattr(event, 'grounding_metadata') and event.grounding_metadata:
        metadata = event.grounding_metadata

        if hasattr(metadata, 'grounding_chunks') and metadata.grounding_chunks:
            print(f"\nSources ({len(metadata.grounding_chunks)} places):")

            for i, chunk in enumerate(metadata.grounding_chunks, 1):
                if hasattr(chunk, 'google_maps'):
                    place = chunk.google_maps
                    print(f"\n{i}. {place.title}")

                    if hasattr(place, 'address'):
                        print(f"   {place.address}")

                    if hasattr(place, 'uri'):
                        print(f"   View on Maps: {place.uri}")

                    if hasattr(place, 'place_id'):
                        print(f"   Place ID: {place.place_id}")

Implementation Considerations

When implementing Google Maps grounding displays:

  1. Place Attribution: Always show that information comes from Google Maps
  2. Simple Integration: Basic text output requires no additional display logic
  3. Optional Enhancements: Add formatted displays with emojis, links, and structured data
  4. Map Links: The uri field provides direct Google Maps links for each place
  5. Place IDs: The placeId field can be used for further integration with Google Maps APIs
  6. Search Queries: The retrievalQueries array shows what searches were performed

Current Limitations and Workarounds

While Google Maps Grounding is powerful, it's important to understand its current state and limitations compared to other grounding tools like Vertex AI Search.

What Maps Grounding CAN Do

βœ… Find businesses, restaurants, and points of interest

βœ… Provide addresses, ratings, and basic place information

βœ… Return general location details and descriptions

βœ… Ground responses with Google Maps data

βœ… Work through Vertex AI with proper authentication

What Maps Grounding CANNOT Do (Yet)

❌ Direct Places API Integration: Unlike direct Places API calls, Maps grounding doesn't provide:

  • Detailed place photos
  • Full review text and user-generated content
  • Advanced place details (wheelchair accessibility, parking info, etc.)
  • Real-time "currently open" status

❌ Advanced Filtering: Limited ability to filter by:

  • Price range ($ - $$$$)
  • Specific amenities (outdoor seating, reservations, etc.)
  • Distance radius from a specific point

❌ Directions Integration: Cannot provide:

  • Turn-by-turn directions
  • Travel time estimates
  • Route alternatives

❌ Custom Maps Display: No built-in support for:

  • Interactive map widgets
  • Custom map markers
  • Map visualizations

Why the Agent-as-Tool Pattern is Required

Based on hours of debugging and reviewing the ADK source code, here's why the Agent-as-Tool pattern is essential:


The Problem:

# ❌ This approach doesn't work reliably!
from google.adk.agents import LlmAgent
from google.adk.tools import FunctionTool
from google.adk.tools.google_maps_grounding_tool import GoogleMapsGroundingTool

agent = LlmAgent(
    model='gemini-2.5-flash',
    tools=[
        GoogleMapsGroundingTool(),           # Built-in grounding tool
        FunctionTool(my_custom_function),    # Custom function
    ]
)

Why it fails:

  1. Environment variables aren't consistently picked up by the Python process
  2. The standard model name doesn't route through Vertex AI
  3. Tool execution context conflicts with other FunctionTools
  4. Results in: ValueError: google_maps parameter is not supported in Gemini API

The Solution (Agent-as-Tool Pattern):

# βœ… This works!

# Step 1: Set environment variables in Python process
os.environ.setdefault("GOOGLE_GENAI_USE_VERTEXAI", "true")
os.environ.setdefault("GOOGLE_CLOUD_PROJECT", "your-project-id")
os.environ.setdefault("GOOGLE_CLOUD_LOCATION", "us-central1")

# Step 2: Create dedicated Maps agent with Vertex model path
maps_agent = LlmAgent(
    model=f"projects/{project}/locations/{location}/publishers/google/models/gemini-2.5-flash",
    name='MapsAgent',
    tools=[GoogleMapsGroundingTool()],  # ONLY this tool
)

# Step 3: Wrap as a tool for the main agent
main_agent = LlmAgent(
    model='gemini-2.5-flash',
    name='MainAgent',
    tools=[
        AgentTool(agent=maps_agent),        # Wrapped Maps agent
        FunctionTool(my_custom_function),   # Your custom tools
    ]
)

Why this works:

  • Environment Variables: os.environ.setdefault() ensures variables are in the process environment before ADK imports
  • Vertex AI Routing: Full model path forces requests through Vertex AI
  • Context Isolation: The Maps agent operates independently, preventing tool conflicts
  • Proper Delegation: The main agent delegates location queries to the specialized Maps agent

This pattern is recommended by the ADK team for all built-in grounding tools and is documented in the ADK built-in tools guide.

Workarounds We Developed

For advanced use cases requiring Places API features, consider hybrid approaches:

def enhanced_place_lookup(query: str):
    """
    Hybrid approach: Combine Maps grounding with Places API

    1. Use Maps grounding for initial search
    2. Extract place_id from grounding metadata
    3. Use Places API for detailed information

    Note: get_place_from_grounding() and places_api are placeholder
    helpers you would implement against your agent and the Places API.
    """

    # Step 1: Maps grounding gives basic info + place_id
    maps_result = get_place_from_grounding(query)

    # Step 2: Use place_id with Places API for details
    if maps_result and maps_result.place_id:
        detailed_info = places_api.get_place_details(maps_result.place_id)

        # Combine both sources
        return {
            "basic_info": maps_result,      # From grounding
            "detailed_info": detailed_info  # From Places API
        }

    return maps_result

Note: This hybrid approach requires:

  • Additional Places API setup and billing
  • Separate API key or credentials
  • Compliance with Places API terms of service

For most use cases, Maps grounding alone is sufficient and provides accurate, up-to-date location information.


Troubleshooting Common Issues

Issue 1: ValueError: google_maps parameter is not supported

Symptoms:

ValueError: google_maps parameter is not supported in Gemini API.
Use Vertex AI for grounding with Google Maps.

Causes:

  1. GOOGLE_GENAI_USE_VERTEXAI not set to "true"
  2. Environment variables not loaded into Python process
  3. Not using the full Vertex AI model path

Solution:

# At the TOP of agent.py (before ADK imports)
os.environ.setdefault("GOOGLE_GENAI_USE_VERTEXAI", "true")
os.environ.setdefault("GOOGLE_CLOUD_PROJECT", "your-project-id")
os.environ.setdefault("GOOGLE_CLOUD_LOCATION", "us-central1")

# Use full Vertex model path
model = f"projects/{project}/locations/{location}/publishers/google/models/gemini-2.5-flash"
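You can verify the path format offline by reproducing the helper from agent.py with example values (the project ID here is made up):

```python
def build_vertex_model_name(project: str, location: str,
                            model: str = "gemini-2.5-flash") -> str:
    """Build the fully qualified Vertex AI model path used by the agent."""
    return f"projects/{project}/locations/{location}/publishers/google/models/{model}"

path = build_vertex_model_name("my-ai-project-123456", "us-central1")
print(path)
# projects/my-ai-project-123456/locations/us-central1/publishers/google/models/gemini-2.5-flash
```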

Issue 2: DefaultCredentialsError

Symptoms:

google.auth.exceptions.DefaultCredentialsError: Could not automatically 
determine credentials.

Cause: Application Default Credentials not set up

Solution:

# Run BOTH commands:
gcloud auth login                      # Step 1: Authenticate gcloud
gcloud auth application-default login  # Step 2: Set up ADC (different!)

# Verify credentials exist
ls ~/.config/gcloud/application_default_credentials.json

Issue 3: Agent Doesn't Use Maps Grounding

Symptoms: Agent responds but doesn't call Maps grounding tool

Causes:

  1. Query not specific enough
  2. Agent instruction doesn't indicate when to use Maps
  3. Location doesn't exist or is too vague

Solution:

# Be explicit in agent instructions
instruction="""
IMPORTANT: For ALL location-based queries, use the MapsAgent tool.

Location-based queries include:
- "Find [business type] in [location]"
- "What is the address of [place]?"
- "Tell me about [landmark]"
- "Where is [business name]?"
- Any query mentioning cities, neighborhoods, or specific places

ALWAYS delegate these queries to MapsAgent!
"""

# Use specific queries
"Find restaurants in Times Square, New York"  # βœ… Good - specific location
"Find good food"                               # ❌ Bad - too vague

Issue 4: Slow Response Times

Symptoms: Maps queries take 5-10 seconds

Explanation: This is normal! Maps grounding involves:

  • Geocoding the location (1-2 seconds)
  • Searching Google Maps database (2-3 seconds)
  • Ranking and filtering results (1 second)
  • LLM synthesis (1-2 seconds)

Total: 5-8 seconds is expected

Optimization tips:

# Use streaming to show progress
async for event in runner.run_async(query, session_id):
    if event.content:
        for part in event.content.parts:
            if part.text:
                print(part.text, end='', flush=True)  # Stream as it arrives

Issue 5: Missing Place Details

Symptoms: Results missing ratings, hours, or other details

Cause: Not all places have complete data in Google Maps

Solution:

# Update instruction to handle missing data gracefully
instruction="""
When presenting Maps results:
1. Always include what data IS available
2. If ratings are missing, state "Rating not available"
3. If hours are missing, suggest "Check Google Maps for current hours"
4. Never hallucinate or make up missing information
5. Always cite Google Maps as the source
"""

Summary

Google Maps Grounding transforms AI agents from general-purpose assistants into location-aware systems capable of providing accurate, place-attributed information from Google Maps' vast database. By integrating this feature into your ADK agents, you enable them to:

  • Access place information from Google Maps' database of 200+ million places worldwide
  • Provide location attribution for transparency and trust in place recommendations
  • Deliver comprehensive answers with verifiable place facts including addresses, ratings, and hours
  • Maintain real-time accuracy within Google's constantly updated place database

The grounding process seamlessly connects user queries to Google Maps' place database, enriching responses with relevant location context while maintaining conversational flow. With proper implementation using the Agent-as-Tool pattern, your agents become powerful tools for location-based information discovery and decision-making.

Key Implementation Requirements

  1. Vertex AI is Mandatory: No fallback to standard Gemini API
  2. Agent-as-Tool Pattern: Essential for reliable grounding
  3. Environment Variables: Must be set in Python process before ADK imports
  4. Full Model Path: Use projects/{project}/locations/{location}/publishers/google/models/{model}
  5. Application Default Credentials: ADC authentication required

Current State and Future Expectations

What Works Today:

  • βœ… Place search and recommendations
  • βœ… Addresses, ratings, and basic details
  • βœ… Integration with ADK's Agent-as-Tool pattern
  • βœ… Reliable grounding through Vertex AI

Known Limitations:

  • ❌ No direct Places API integration for advanced details
  • ❌ Limited real-time availability checking
  • ❌ No custom map rendering capabilities
  • ❌ No turn-by-turn directions integration

What's Next:
As Google continues to develop the ADK platform, we expect:

  • Enhanced Places API integration
  • More comprehensive place details
  • Real-time place availability
  • Better documentation and examples

Until then, this article serves as the de facto documentation for Maps grounding with ADK.


What's Next?

πŸŽ‰ Congratulations! You now have a working Google Maps grounding agent!

Coming in Part 3: Full-Stack Frontend

We'll build a beautiful UI for your agent with:

  • CopilotKit for the chat interface
  • AG-UI Protocol for real-time agent communication
  • Next.js frontend with modern UI
  • Streaming responses for better UX

The Complete 5-Part Series

  • βœ… Part 1: Google Search Grounding with ADK
  • βœ… Part 2: Google Maps Grounding with ADK (completed!)
  • Part 3: Full-Stack Frontend with CopilotKit & AG-UI
  • Part 4: Persistent Sessions with PostgreSQL & Docker
  • Part 5: Production Deployment on Cloud Run

By Part 5, you'll have a production-ready, scalable AI agent deployed to the cloud with persistent sessions and a beautiful UI!


Questions or Issues?

If you're struggling with Maps grounding, drop a comment! I spent hours debugging this so you don't have to. This article documents everything that's not in the official docs yet.

⭐ Star the repo if this saved you time, and follow me for Part 3 where we build the frontend!


About the Author: I'm Claret, an AI Engineer building production agentic systems. When Google's ADK documentation didn't exist for Maps grounding, I dug through source code and debugged for hours to figure it out. This series shares what I learned so you can skip the struggle and use this as your Maps grounding documentation.

Connect with me on LinkedIn | GitHub
