DEV Community

Claret Ibeawuchi


Building AI Agents with Google Search Grounding and ADK (Part 1/5)

This is Part 1 of a 5-part series on building production-ready AI agents with Google's Agent Development Kit (ADK):

  • Part 1: Google Search Grounding with ADK (you are here)
  • Part 2: Adding Google Maps Grounding with Vertex AI
  • Part 3: Building a Full-Stack Frontend with CopilotKit & AG-UI
  • Part 4: Persistent Sessions with PostgreSQL & Docker
  • Part 5: Production Deployment on Cloud Run

Why Search Grounding Matters

Most AI models are trained on data with a cutoff date. Ask ChatGPT about events from last week and it can't help you; ask Claude about today's stock prices and it will apologise. This is the knowledge cutoff problem, and it's a major limitation for production AI applications.

Enter Search Grounding: the ability to ground your AI agent's responses in real-time web data. Instead of hallucinating or saying "I don't know," your agent can:

✅ Search the web for current information

✅ Cite sources with clickable links

✅ Provide factual answers backed by real data

✅ Stay updated without retraining the model

According to Google Cloud's documentation, grounding ensures responses are based on the latest and most accurate information, which is critical for:

  • News and current events - "What's happening in AI this week?"
  • Financial data - "What's Tesla's stock price today?"
  • Technical troubleshooting - "How do I fix the latest React 19 breaking changes?"
  • Research - "What are the recent findings on climate change?"

The Challenge

Implementing search grounding isn't straightforward. When I built my first search-grounded agent, I spent 3 hours debugging the cryptic error: "Function search is not found".

The issue? Google's Search grounding tool can't be mixed directly with other function tools. You need a specific pattern, the Agent-as-Tool pattern, which we'll implement in this tutorial.

What we'll build: A Python-based AI agent using ADK that can search the web in real-time and provide current, source-attributed answers.

🔗 Full Code: https://github.com/Greyisheep/ag-ui-adk-grounding-app


Prerequisites

Before we start, make sure you have:

  • Python 3.12+ installed
  • A Google AI Studio API key (Get one here)
  • Basic knowledge of Python and async programming
  • 15-20 minutes of your time

Understanding Google Search Grounding

What is Grounding?

Grounding is the process of connecting an LLM's responses to external, verifiable sources. Instead of relying solely on training data, the model:

  1. Executes a search query based on the user's question
  2. Retrieves relevant web results from Google Search
  3. Processes the search results to extract information
  4. Generates a response grounded in those results
  5. Returns source attribution with links and metadata
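Stripped of the model and the real search API, that loop can be sketched in a few lines. Every name here is illustrative; `web_search` is a stub standing in for Google Search, and the "generation" step is a string template rather than an LLM:

```python
def web_search(query: str) -> list[dict]:
    # Stub standing in for steps 1-2: in reality, Google Search runs here.
    return [{"title": "Example article",
             "uri": "https://example.com/article",
             "snippet": "An example snippet relevant to the query."}]

def grounded_answer(question: str) -> dict:
    results = web_search(question)                       # 1-2: search and retrieve
    context = " ".join(r["snippet"] for r in results)    # 3: extract information
    answer = f"Based on {len(results)} source(s): {context}"  # 4: generate (an LLM, in reality)
    sources = [{"title": r["title"], "uri": r["uri"]} for r in results]  # 5: attribution
    return {"answer": answer, "sources": sources}

result = grounded_answer("What happened in AI this week?")
```

The rest of this tutorial replaces the stub and the template with real ADK components, but the shape of the data flow stays the same.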

The Benefits

According to Google's grounding documentation, you can:

  • ✅ Ensure responses are based on current, accurate information
  • ✅ Retrieve artifacts from the web for analysis and reasoning
  • ✅ Find region-specific information with localization
  • ✅ Perform technical troubleshooting with latest docs
  • ✅ Provide source attribution for transparency and trust

Real-World Use Cases

1. Customer Support Agents

User: "How do I fix the error in the latest version of your API?"
Agent: *Searches documentation* → Provides fix with link to release notes

2. Research Assistants

User: "What are the latest developments in quantum computing?"
Agent: *Searches recent papers* → Summarizes findings with citations

3. News Aggregation

User: "What's the top tech news today?"
Agent: *Searches news sites* → Delivers current stories with sources

4. Financial Analysis

User: "What's driving the market today?"
Agent: *Searches financial news* → Provides analysis with data sources

Part 1: Setting Up Your ADK Project

Let's create a Python project with ADK from scratch.

1.1 Create Project Structure

# Create project directory
mkdir adk-search-agent
cd adk-search-agent

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Create project files
touch agent.py
touch requirements.txt
touch .env
touch .gitignore

1.2 Add Dependencies

Create requirements.txt:

google-adk
google-genai
python-dotenv

Install dependencies:

pip install -r requirements.txt

1.3 Configure Environment Variables

Create .env file:

# .env
GOOGLE_API_KEY=your-google-api-key-here

Create .gitignore:

# .gitignore
venv/
.env
*.log
__pycache__/
*.pyc

Security Note: Never commit your .env file! The .gitignore ensures your API key stays secure.


Part 2: The Agent-as-Tool Pattern Explained

This is the critical pattern you need to understand for search grounding.

The Problem: Direct Tool Mixing Fails

My first attempt looked like this:

# This approach doesn't work!
from google.adk.agents import LlmAgent
from google.adk.tools import FunctionTool
from google.adk.tools.google_search_tool import GoogleSearchTool

def my_custom_function() -> str:
    """Custom business logic"""
    return "Some result"

# This will fail!
agent = LlmAgent(
    model='gemini-2.5-flash',
    tools=[
        GoogleSearchTool(),                    # Built-in grounding tool
        FunctionTool(my_custom_function),      # Custom function
    ]
)

Error: "Function search is not found"

Why it fails:

  • Google Search grounding operates in its own execution context
  • It uses Google's Search API with special handling
  • Mixing it directly with FunctionTools causes context conflicts
  • The ADK can't properly resolve the search function calls

The Solution: Agent-as-Tool Pattern

The fix is to isolate the search tool in a dedicated agent, then wrap that agent as a tool:

# This works!
from google.adk.tools import AgentTool

# Step 1: Create a dedicated search agent
search_agent = LlmAgent(
    model='gemini-2.5-flash',
    name='SearchAgent',
    instruction="You are a specialist in Google Search grounding.",
    tools=[GoogleSearchTool()],  # ONLY the search tool
)

# Step 2: Wrap it as a tool for the main agent
main_agent = LlmAgent(
    model='gemini-2.5-flash',
    name='MainAgent',
    instruction="You are a helpful assistant.",
    tools=[
        AgentTool(agent=search_agent),        # Wrapped search agent
        FunctionTool(my_custom_function),     # Your custom tools
    ]
)

Why This Pattern Works

Isolation: Each agent has its own context

  • The search_agent focuses solely on search grounding
  • No conflicts with other tools or execution contexts

Delegation: The main agent delegates to the search agent

  • When the main agent needs current information, it calls the search agent
  • The search agent executes the search and returns results
  • The main agent incorporates those results into its response

Flexibility: You can add multiple specialized agents

  • Search agent for web grounding
  • Maps agent for location grounding (Part 2!)
  • Database agent for data retrieval
  • Each isolated in its own context

Recommended by Google: This pattern is documented in the ADK built-in tools guide as the correct approach for grounding tools.


Part 3: Building Your Search-Grounded Agent

Now let's build the complete agent. We'll start simple and add complexity.

3.1 Basic Setup and Imports

Create agent.py and add:

"""
ADK Search-Grounded Agent
Demonstrates the Agent-as-Tool pattern for Google Search grounding
"""

from dotenv import load_dotenv
load_dotenv()

import os
import asyncio
import logging
from datetime import datetime

# ADK imports
from google.adk.agents import LlmAgent
from google.adk.tools import FunctionTool, AgentTool
from google.adk.tools.google_search_tool import GoogleSearchTool
from google.adk.sessions import InMemorySessionService
from google.adk.runners import Runner

# Setup logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

# Verify API key is loaded
if not os.getenv('GOOGLE_API_KEY'):
    raise ValueError("GOOGLE_API_KEY not found in environment variables")

logger.info("ADK Search Agent initialized")

3.2 Add a Custom Tool (Optional)

Let's add a simple custom function to demonstrate tool mixing:

def get_current_time() -> str:
    """
    Returns the current date and time.
    Useful for time-sensitive queries.
    """
    # astimezone() attaches the local timezone; without it, %Z is empty
    current = datetime.now().astimezone()
    return f"Current time: {current.strftime('%Y-%m-%d %H:%M:%S %Z')}"

logger.info("Custom tools defined")

3.3 Create the Search Agent

This agent is dedicated to search grounding only:

# Create dedicated search agent
search_agent = LlmAgent(
    model='gemini-2.5-flash',
    name='SearchAgent',
    instruction="""You are a specialist in Google Search grounding.

    Your role:
    - Use web search to find current, factual information
    - Provide structured, well-organized findings
    - Always cite your sources with URLs
    - Focus on accuracy and recency

    When you find information, format it clearly with:
    1. Direct answer to the query
    2. Supporting details and context
    3. Source attribution with links""",
    tools=[GoogleSearchTool()],  # ONLY this tool - critical!
)

logger.info("Search agent created")

Key points:

  • model='gemini-2.5-flash': Fast, cost-effective for search tasks
  • name='SearchAgent': Helpful for debugging and logging
  • instruction: Clear, specific role definition
  • tools=[GoogleSearchTool()]: Only the search tool; no mixing!

3.4 Create the Main Agent

The main agent coordinates everything:

# Create main agent with search capability
main_agent = LlmAgent(
    model='gemini-2.5-flash',
    name='MainAgent',
    instruction="""You are a helpful AI assistant with access to:

    1. Real-time web search for current information
    2. Current time/date information

    Guidelines:
    - When asked about CURRENT events, RECENT news, or TIME-SENSITIVE 
      information, you MUST use the search capability
    - When asked about the time or date, use the time tool
    - Always cite sources when using search results
    - Be concise but comprehensive
    - Admit when you don't have current information and suggest using search

    Example queries that require search:
    - "What's the latest news about [topic]?"
    - "What happened today in [domain]?"
    - "Recent developments in [field]"
    - "Current status of [event]"
    """,
    tools=[
        AgentTool(agent=search_agent),      # Search grounding
        FunctionTool(get_current_time),     # Custom function
    ]
)

logger.info("Main agent created with search capability")

3.5 Setup Runner and Session Management

# Create session service (in-memory for now, PostgreSQL in Part 4!)
session_service = InMemorySessionService()

# Create runner
runner = Runner(
    agent=main_agent,
    session_service=session_service,
    app_name="search_agent_app"
)

logger.info("Runner initialized")

Part 4: Testing Your Search Agent

4.1 Create a Query Function

Add this helper function to run queries:

async def run_query(query: str, session_id: str | None = None):
    """
    Run a single query through the agent

    Args:
        query: The user's question
        session_id: Optional session ID for conversation continuity
    """
    print(f"\n{'='*60}")
    print(f"πŸ” QUERY: {query}")
    print(f"{'='*60}\n")

    # Create or reuse session
    if not session_id:
        session = session_service.create_session_sync()
        session_id = session.session_id

    # Run the agent
    response_text = ""
    async for event in runner.run_async(
        user_message=query,
        session_id=session_id
    ):
        # Collect text content from events
        if hasattr(event, 'content') and event.content:
            for part in event.content.parts:
                if hasattr(part, 'text') and part.text:
                    response_text += part.text

    print(f"RESPONSE:")
    print(f"{response_text}\n")
    print(f"{'-'*60}\n")

    return session_id

4.2 Create Test Cases

Add a main function with test queries:

async def main():
    """Run test queries"""

    print("\n" + "="*60)
    print("ADK Search-Grounded Agent - Test Suite")
    print("="*60 + "\n")

    # Test 1: Current events (requires search)
    await run_query(
        "What are the latest developments in artificial intelligence this week?"
    )

    # Test 2: Recent news (requires search)
    await run_query(
        "What are today's top technology news stories?"
    )

    # Test 3: Factual query with recency (requires search)
    await run_query(
        "Tell me about recent breakthroughs in quantum computing"
    )

    # Test 4: Simple time query (uses custom function)
    await run_query(
        "What's the current time?"
    )

    # Test 5: Mixed query (might use both)
    await run_query(
        "What major tech events happened today?"
    )

    print("\n" + "="*60)
    print("Test suite completed!")
    print("="*60 + "\n")

if __name__ == "__main__":
    asyncio.run(main())

4.3 Run Your Agent

python agent.py

4.4 Expected Output

You should see output like this:

============================================================
ADK Search-Grounded Agent - Test Suite
============================================================

============================================================
QUERY: What are the latest developments in artificial intelligence this week?
============================================================

RESPONSE:
Based on recent web sources, here are the latest AI developments:

1. **OpenAI's GPT-4 Vision Capabilities**
   OpenAI has announced enhanced multimodal capabilities for GPT-4, 
   allowing it to process and analyze images alongside text.
   Source: https://techcrunch.com/2024/...

2. **EU AI Act Implementation**
   The European Union has begun enforcing its comprehensive AI 
   regulations, affecting companies worldwide.
   Source: https://reuters.com/technology/...

3. **Google's Gemini Advanced**
   Google released Gemini Advanced with improved reasoning and 
   coding capabilities.
   Source: https://blog.google/technology/...

------------------------------------------------------------

Part 5: Understanding Grounding Metadata

One of the most powerful features of search grounding is the metadata returned with each response.

5.1 Metadata Structure

According to Google's documentation, the response includes:

{
  "content": "The AI-generated response text",
  "groundingMetadata": {
    "webSearchQueries": ["actual search query executed"],
    "groundingChunks": [
      {
        "web": {
          "uri": "https://example.com/article",
          "title": "Article Title"
        }
      }
    ],
    "groundingSupports": [
      {
        "segment": {
          "text": "Specific text segment from response",
          "startIndex": 0,
          "endIndex": 50
        },
        "groundingChunkIndices": [0],
        "confidenceScores": [0.95]
      }
    ]
  }
}

Key Fields:

  • webSearchQueries: The actual search queries the agent executed
  • groundingChunks: Web sources used, with URLs and titles
  • groundingSupports: Maps response segments to sources
  • confidenceScores: Confidence in the grounding (0.0 to 1.0)
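Working from a response shaped like the JSON above, pairing each grounded segment with its sources is plain dictionary traversal. This sketch assumes the dict form shown; the Python SDK surfaces the same fields as snake_case attributes, as the next section demonstrates:

```python
def extract_sources(response: dict) -> list[dict]:
    """Pair each grounded text segment with its source URIs and average confidence."""
    meta = response.get("groundingMetadata", {})
    chunks = meta.get("groundingChunks", [])
    pairs = []
    for support in meta.get("groundingSupports", []):
        uris = [chunks[i]["web"]["uri"] for i in support.get("groundingChunkIndices", [])]
        scores = support.get("confidenceScores", [])
        pairs.append({
            "text": support["segment"]["text"],
            "uris": uris,
            "confidence": sum(scores) / len(scores) if scores else None,
        })
    return pairs

# The example payload from section 5.1:
example = {
    "content": "The AI-generated response text",
    "groundingMetadata": {
        "webSearchQueries": ["actual search query executed"],
        "groundingChunks": [
            {"web": {"uri": "https://example.com/article", "title": "Article Title"}}
        ],
        "groundingSupports": [{
            "segment": {"text": "Specific text segment from response",
                        "startIndex": 0, "endIndex": 50},
            "groundingChunkIndices": [0],
            "confidenceScores": [0.95],
        }],
    },
}
pairs = extract_sources(example)
```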

5.2 Accessing Metadata in Code

Enhance your run_query function to display metadata:

async def run_query_with_metadata(query: str):
    """Run query and display grounding metadata"""

    print(f"\nπŸ” Query: {query}\n")

    session = session_service.create_session_sync()

    sources = []
    response_text = ""

    async for event in runner.run_async(
        user_message=query,
        session_id=session.session_id
    ):
        # Collect response text
        if hasattr(event, 'content') and event.content:
            for part in event.content.parts:
                if hasattr(part, 'text') and part.text:
                    response_text += part.text

        # Collect grounding metadata
        if hasattr(event, 'grounding_metadata'):
            metadata = event.grounding_metadata

            if hasattr(metadata, 'web_search_queries'):
                print("Search Queries Executed:")
                # use a distinct loop variable so we don't shadow the `query` parameter
                for search_query in metadata.web_search_queries:
                    print(f"   - {search_query}")

            if hasattr(metadata, 'grounding_chunks'):
                print(f"\nSources ({len(metadata.grounding_chunks)}):")
                for i, chunk in enumerate(metadata.grounding_chunks[:5], 1):
                    if hasattr(chunk, 'web'):
                        print(f"   {i}. {chunk.web.title}")
                        print(f"      {chunk.web.uri}")
                        sources.append(chunk.web.uri)

            if hasattr(metadata, 'grounding_supports'):
                print(f"\nGrounding Confidence:")
                for support in metadata.grounding_supports[:3]:
                    scores = getattr(support, 'confidence_scores', None)
                    if scores:  # guard against missing or empty score lists
                        avg_confidence = sum(scores) / len(scores)
                        print(f"   - {avg_confidence:.2%}")

    print(f"\nResponse:\n{response_text}\n")

    return sources

Part 6: The Complete Execution Flow

Understanding what happens under the hood:

1. User asks: "What's the latest AI news?"
        ↓
2. Main Agent receives query
        ↓
3. Main Agent analyzes: "This requires current information"
        ↓
4. Main Agent decides to call AgentTool(search_agent)
        ↓
5. Search Agent receives the delegated query
        ↓
6. Search Agent calls GoogleSearchTool()
        ↓
7. Google Search API:
   - Executes web search
   - Retrieves relevant results
   - Extracts content and metadata
        ↓
8. Search results return to Search Agent
        ↓
9. Search Agent processes and structures the information
        ↓
10. Structured results return to Main Agent
        ↓
11. Main Agent synthesizes final response
        ↓
12. Response delivered to user with sources

Why Separate Agents?

Execution Context Isolation

  • Search grounding needs its own event loop
  • Prevents conflicts with other async operations
  • Allows proper error handling and retries

Clear Separation of Concerns

  • Search Agent: "I only do search, and I do it well"
  • Main Agent: "I coordinate and synthesize"
  • Each agent has a focused, well-defined role

Scalability

  • Easy to add more specialized agents (Maps in Part 2!)
  • Can run agents in parallel if needed
  • Better resource management

Debugging

  • Errors are isolated to specific agents
  • Easier to log and trace execution
  • Clear boundaries for testing

Part 7: Common Issues & Solutions

Issue 1: "Function search is not found"

Symptoms:

Error: Function 'search' is not found in the available tools

Cause: Mixing GoogleSearchTool() directly with other FunctionTools

Solution: Use the Agent-as-Tool pattern

# Correct
search_agent = LlmAgent(tools=[GoogleSearchTool()])
main_agent = LlmAgent(tools=[AgentTool(agent=search_agent)])

# Incorrect
agent = LlmAgent(tools=[GoogleSearchTool(), FunctionTool(my_func)])

Issue 2: Agent Doesn't Search When It Should

Symptoms: Agent gives generic responses without using search

Cause: Instruction doesn't clearly indicate when to search

Solution: Be explicit and use imperative language

instruction="""
When asked about CURRENT events, RECENT news, or TIME-SENSITIVE 
information, you MUST use the search capability.

Keywords that require search:
- "latest", "recent", "today", "this week"
- "current", "now", "happening"
- Any specific dates or timeframes
"""
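As a quick offline sanity check, you can approximate when a query *should* trigger search with a keyword heuristic built from that list. This is purely illustrative; in production the LLM makes the routing decision, but a check like this is handy when writing tests for your agent:

```python
import re

# Time-sensitive keywords from the instruction above (illustrative, not exhaustive)
SEARCH_KEYWORDS = {
    "latest", "recent", "today", "this week",
    "current", "now", "happening",
}

def likely_needs_search(query: str) -> bool:
    """Rough check: does the query contain a time-sensitive keyword?"""
    q = query.lower()
    # \b word boundaries avoid false hits like "now" inside "knowledge"
    return any(re.search(rf"\b{re.escape(kw)}\b", q) for kw in SEARCH_KEYWORDS)

print(likely_needs_search("What's the latest AI news?"))   # time-sensitive
print(likely_needs_search("Explain Python decorators"))    # evergreen
```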

Issue 3: API Key Not Found

Symptoms: ValueError: GOOGLE_API_KEY not found

Solution:

# Check .env file exists and has correct format
cat .env

# Verify it loads correctly
python -c "from dotenv import load_dotenv; load_dotenv(); import os; print(os.getenv('GOOGLE_API_KEY'))"

# Make sure no extra quotes or spaces
# ✅ Correct: GOOGLE_API_KEY=AIzaSy...
# ❌ Wrong: GOOGLE_API_KEY="AIzaSy..."  (remove quotes)

Issue 4: ModuleNotFoundError

Symptoms: ModuleNotFoundError: No module named 'google.adk'

Solution:

# Ensure virtual environment is activated
which python  # Should show path to venv/bin/python

# If not activated:
source venv/bin/activate

# Reinstall dependencies
pip install --upgrade pip
pip install -r requirements.txt

# Verify installation
pip show google-adk

Issue 5: Slow Search Responses

Symptoms: Queries take 5-10 seconds to respond

Explanation: This is normal! Search grounding involves:

  • Executing web searches (1-2 seconds)
  • Retrieving and processing results (1-2 seconds)
  • LLM synthesis (1-2 seconds)

Optimization tips (for later):

  • Cache common queries
  • Use streaming responses to show progress
  • Set timeout limits for searches
  • Implement async patterns for better UX

Issue 6: Empty Search Results

Symptoms: Agent says "I couldn't find information"

Causes:

  • Query is too vague
  • Search terms are too specific/obscure
  • Network issues

Solution:

# Add fallback instructions
instruction="""
...
If search returns no results:
1. Try rephrasing the search query
2. Use broader search terms
3. Inform the user politely and suggest alternative queries
"""

Part 8: Production-Ready Enhancements

8.1 Add Comprehensive Logging

import logging

# Configure detailed logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler("agent.log"),
        logging.StreamHandler()
    ]
)
logger = logging.getLogger(__name__)

# Add callbacks for visibility
def log_before_agent(ctx):
    logger.info(f"Agent called: {ctx.agent.name}")
    if hasattr(ctx, 'user_message'):
        logger.info(f"Query: {ctx.user_message}")

def log_after_agent(ctx):
    logger.info(f"Agent completed: {ctx.agent.name}")

# Apply to agents
search_agent = LlmAgent(
    model='gemini-2.5-flash',
    name='SearchAgent',
    tools=[GoogleSearchTool()],
    before_agent_callback=log_before_agent,
    after_agent_callback=log_after_agent,
)

View logs:

tail -f agent.log

8.2 Add Error Handling

async def run_query_safely(query: str):
    """Run query with error handling"""
    try:
        session = session_service.create_session_sync()
        response_text = ""

        async for event in runner.run_async(
            user_message=query,
            session_id=session.session_id
        ):
            if hasattr(event, 'content') and event.content:
                for part in event.content.parts:
                    if hasattr(part, 'text') and part.text:
                        response_text += part.text

        return response_text

    except Exception as e:
        logger.error(f"Error running query: {e}")
        return f"Sorry, I encountered an error: {str(e)}"

8.3 Add Rate Limiting (For Production)

from datetime import datetime, timedelta

class RateLimiter:
    def __init__(self, max_requests=10, time_window=60):
        self.max_requests = max_requests
        self.time_window = time_window  # seconds
        self.requests = []

    def can_proceed(self):
        now = datetime.now()
        cutoff = now - timedelta(seconds=self.time_window)

        # Remove old requests
        self.requests = [r for r in self.requests if r > cutoff]

        # Check limit
        if len(self.requests) < self.max_requests:
            self.requests.append(now)
            return True
        return False

# Usage
limiter = RateLimiter(max_requests=10, time_window=60)

async def run_query_with_limit(query: str):
    if not limiter.can_proceed():
        return "Rate limit exceeded. Please wait a moment."

    return await run_query(query)
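A few assertions are enough to convince yourself the sliding window behaves as intended; the class is repeated here so the snippet runs standalone:

```python
from datetime import datetime, timedelta

class RateLimiter:
    """Sliding-window limiter, identical to the one above."""

    def __init__(self, max_requests=10, time_window=60):
        self.max_requests = max_requests
        self.time_window = time_window  # seconds
        self.requests = []

    def can_proceed(self):
        now = datetime.now()
        cutoff = now - timedelta(seconds=self.time_window)
        # Drop requests that have fallen out of the window
        self.requests = [r for r in self.requests if r > cutoff]
        if len(self.requests) < self.max_requests:
            self.requests.append(now)
            return True
        return False

limiter = RateLimiter(max_requests=3, time_window=60)
first_three = [limiter.can_proceed() for _ in range(3)]  # all allowed
fourth = limiter.can_proceed()                           # blocked: window is full
# Simulate the window sliding by backdating the recorded timestamps
limiter.requests = [r - timedelta(seconds=61) for r in limiter.requests]
after_window = limiter.can_proceed()                     # allowed again
```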

Complete Working Code

Here's the full agent.py with all enhancements:

"""
ADK Search-Grounded Agent - Complete Implementation
Part 1/5 of the ADK Production Agent Series
"""

from dotenv import load_dotenv
load_dotenv()

import os
import asyncio
import logging
from datetime import datetime

# ADK imports
from google.adk.agents import LlmAgent
from google.adk.tools import FunctionTool, AgentTool
from google.adk.tools.google_search_tool import GoogleSearchTool
from google.adk.sessions import InMemorySessionService
from google.adk.runners import Runner

# Setup logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler("agent.log"),
        logging.StreamHandler()
    ]
)
logger = logging.getLogger(__name__)

# Verify API key
if not os.getenv('GOOGLE_API_KEY'):
    raise ValueError("GOOGLE_API_KEY not found")

# Custom tool
def get_current_time() -> str:
    """Returns the current date and time"""
    return f"Current time: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}"

# Callbacks for logging
def log_before_agent(ctx):
    logger.info(f"Agent: {ctx.agent.name}")

# Search agent
search_agent = LlmAgent(
    model='gemini-2.5-flash',
    name='SearchAgent',
    instruction="""You are a specialist in Google Search grounding.
    Use web search to find current information and cite sources.""",
    tools=[GoogleSearchTool()],
    before_agent_callback=log_before_agent,
)

# Main agent
main_agent = LlmAgent(
    model='gemini-2.5-flash',
    name='MainAgent',
    instruction="""You are a helpful AI assistant.
    When asked about CURRENT events or RECENT information, 
    use search to provide accurate, source-attributed responses.""",
    tools=[
        AgentTool(agent=search_agent),
        FunctionTool(get_current_time),
    ],
    before_agent_callback=log_before_agent,
)

# Setup runner
session_service = InMemorySessionService()
runner = Runner(
    agent=main_agent,
    session_service=session_service,
    app_name="search_agent_app"
)

async def run_query(query: str):
    """Run a single query"""
    print(f"\n{'='*60}")
    print(f"πŸ” QUERY: {query}")
    print(f"{'='*60}\n")

    session = session_service.create_session_sync()
    response_text = ""

    async for event in runner.run_async(
        user_message=query,
        session_id=session.session_id
    ):
        if hasattr(event, 'content') and event.content:
            for part in event.content.parts:
                if hasattr(part, 'text') and part.text:
                    response_text += part.text

    print(f"RESPONSE:\n{response_text}\n")
    print(f"{'-'*60}\n")

async def main():
    """Run test suite"""
    print("\n" + "="*60)
    print("ADK Search-Grounded Agent - Test Suite")
    print("="*60 + "\n")

    await run_query("What are the latest AI developments this week?")
    await run_query("What's today's top tech news?")
    await run_query("What's the current time?")

    print("\nTests completed!\n")

if __name__ == "__main__":
    asyncio.run(main())

Run it:

python agent.py

What's Next?

🎉 Congratulations! You've built an AI agent with real-time search grounding!

Coming in Part 2: Google Maps Grounding

We'll add location intelligence to your agent:

  • Set up Vertex AI for Maps grounding
  • Handle location queries: "Find restaurants near Times Square"
  • Get directions and distances
  • Combine search and maps in one agent

The Complete 5-Part Series

  • ✅ Part 1: Google Search Grounding (completed!)
  • Part 2: Google Maps Grounding with Vertex AI
  • Part 3: Full-Stack Frontend with CopilotKit & AG-UI
  • Part 4: Persistent Sessions with PostgreSQL & Docker
  • Part 5: Production Deployment on Cloud Run

By Part 5, you'll have a production-ready, scalable AI agent deployed to the cloud with persistent sessions and a beautiful UI!


Key Takeaways

✅ Search grounding solves the knowledge cutoff problem

✅ Agent-as-Tool pattern is mandatory for grounding tools

✅ Never mix GoogleSearchTool() directly with FunctionTools

✅ Dedicated agents for each grounding tool prevent conflicts

✅ Clear instructions tell the agent when to search

✅ Grounding metadata provides transparency and trust



Questions?

Drop a comment below! I spent 3 hours debugging the "Function search is not found" error, so I'm happy to help you skip that pain 😅

⭐ Star the repo if this helped you, and follow me for Part 2 where we add Google Maps grounding!


About the Author: I'm Claret, an AI Engineer building production agentic systems. I lead a study group of 30+ engineers exploring AI agents and system design.

Connect with me on LinkedIn | GitHub
