Reference articles:
- Gemini API tooling updates: context circulation, tool combos and Maps grounding for Gemini 3
- Google Places API (New) - searchNearby
- GitHub: linebot-spot-finder
- Complete code GitHub (Meeting Helper LINE Bot Spot Finder)
Background
Combining a LINE Bot with Gemini is already common. Whether it's using Google Search grounding to let the model look up real-time information, or Function Calling to let it invoke custom logic, each technique is mature on its own.
But what if you want to achieve both "map location context" and "query real ratings" in the same question?
Taking restaurant search as an example, the traditional approach usually looks like this:
User: "Help me find a hot pot restaurant nearby with a rating of 4 stars or above"
Solution A (using only Maps Grounding):
Gemini has map context, but the rating information is generated by the model itself, so its accuracy is not guaranteed.
Solution B (using only Places API):
You can get real ratings, but there is no map context, and Gemini doesn't know where the user is.
To have both, you usually need to make two API calls, or manually connect them yourself.
AI can search maps, and it can call external APIs, but doing both in a single call has always been an awkward gap in the old Gemini API architecture.
That changed on March 17, 2026, when Google released the Gemini API Tooling Updates (by Mariano Cocirio), which offer an official solution to exactly this problem.
What are Tool Combinations?
Google announced three core features in this update:
1. Tool Combinations: developers can now attach built-in tools (such as Google Search and Google Maps) and custom Function Declarations together in a single Gemini API call. The model decides which tool to call and when, then integrates the results into the final answer.
2. Maps Grounding: Gemini can now perceive map data directly. It gets more than a text description of a "location": it has genuine spatial context, knowing where the user is and what is nearby.
3. Context Circulation: context flows naturally between multi-turn tool calls, so the model fully remembers the results of the first tool call when it makes the second.
The key to this change is:
```python
# Old approach (two tools cannot coexist)
types.Tool(google_search=types.GoogleSearch())
types.Tool(function_declarations=[MY_FN])

# New approach (the same Tool object, both coexist)
types.Tool(
    google_maps=types.GoogleMaps(),
    function_declarations=[MY_FN],
)
```
A one-line change opens up an entirely new way of combining tools.
Project Goal
This time, I used Tool Combinations to transform the existing linebot-spot-finder, upgrading it from "only Maps Grounding for rough answers" to "Google Maps context + Places API real data":
After the user sends their GPS location, they enter: "Please find a hot pot restaurant with a rating of 4 stars or above, suitable for group dining, and list the name, address, and review summary."
Bot (old Maps Grounding): "There are several hot pot restaurants nearby, and the ratings are good." (AI describes it itself, which may not be accurate)
Bot (new Tool Combo): "Lao Wang Hot Pot | 100 Shimin Avenue, Xinyi District, Taipei City | Rating 4.6 (312) | Reviews: Large portions, great value for money, suitable for group dining; efficient service, fast serving."
The difference: Gemini now receives both the map context (where you are) and real structured data from the Places API (rating numbers, review text), so the answer changes from a vague description to grounded information.
Architecture Design
Overall Message Flow
```
LINE User sends GPS location
        │
        ▼
handle_location() → session.metadata stores lat/lng
        │
        └──► Returns Quick Reply (restaurant / gas station / parking lot)

LINE User sends text question (e.g. "Find a hot pot restaurant with a rating of 4 stars or above")
        │
        ▼
handle_text()
        │
        ├── session has lat/lng?
        │     Yes → tool_combo_search(query, lat, lng)  ← focus of this article
        │     No  → fallback: Gemini Chat + Google Search
        │
        └──► Returns natural language answer
```
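The dispatch above can be sketched as plain Python. The session store and return values here are simplified stand-ins for illustration, not the project's actual LINE SDK handlers:

```python
# Simplified stand-in for the dispatch flow: a dict keyed by user_id plays the
# role of session.metadata; real handlers would receive LINE SDK event objects.
SESSIONS: dict[str, dict] = {}

def handle_location(user_id: str, lat: float, lng: float) -> str:
    # Store the GPS fix so later text messages can reuse it.
    SESSIONS.setdefault(user_id, {}).update({"lat": lat, "lng": lng})
    return "quick_reply: restaurant / gas station / parking lot"

def handle_text(user_id: str, query: str) -> str:
    session = SESSIONS.get(user_id, {})
    if "lat" in session and "lng" in session:
        # Location known → Tool Combo path (placeholder string for the call)
        return f"tool_combo_search({query!r}, {session['lat']}, {session['lng']})"
    # No location yet → fall back to Gemini chat + Google Search grounding
    return f"fallback_search({query!r})"
```

The point of the sketch is the routing decision: the GPS handler only writes state, and the text handler picks a path based on whether that state exists.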
Tool Combo Agentic Loop
```
tool_combo_search(query, lat, lng)
        │
        ▼
Step 1: generate_content()
        tools = [google_maps + search_nearby_restaurants]
        │
        ▼
response.candidates[0].content.parts has function_call?
        ╱           ╲
      Yes            No
       │              │
       ▼              ▼
_execute_function()   Directly returns response.text
  → _call_places_api()
    (Places API searchNearby)
    Returns rating, address, reviews
       │
       ▼
Step 2: Collect into a single Content(role="user")
        and append to history
       │
       ▼
Step 3: generate_content(contents=history)
        Gemini integrates map context + Places data
       │
       ▼
Returns final.text
```
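The diagram can be condensed into an SDK-agnostic loop. The dict shapes and the names `model_call` and `execute_fn` are illustrative stand-ins for `generate_content` and the Python dispatcher, not the google-genai types:

```python
# SDK-agnostic sketch of the agentic loop: ask the model, run any requested
# functions, feed results back, repeat until a plain-text answer arrives.
def agentic_loop(model_call, execute_fn, query: str, max_rounds: int = 3) -> str:
    history = [{"role": "user", "parts": [{"text": query}]}]
    for _ in range(max_rounds):
        reply = model_call(history)      # Step 1 (and Step 3 on later rounds)
        history.append(reply)
        calls = [p["function_call"] for p in reply["parts"] if "function_call" in p]
        if not calls:                    # no tool request → this is the answer
            return reply["parts"][0]["text"]
        # Step 2: execute every requested function, return results as one turn
        results = [{"function_response": execute_fn(c)} for c in calls]
        history.append({"role": "user", "parts": results})
    return "(no final answer)"

# Minimal stub run: the first reply requests a tool, the second answers.
def _fake_model(history):
    if len(history) == 1:
        return {"role": "model",
                "parts": [{"function_call": {"name": "search_nearby_restaurants",
                                             "args": {}}}]}
    return {"role": "model", "parts": [{"text": "done"}]}

answer = agentic_loop(_fake_model, lambda call: {"restaurants": []}, "find hot pot")
```

The real implementation below performs exactly one round of this loop; bounding the rounds (`max_rounds`) is a cheap safeguard if you ever let the model chain more tool calls.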
Why not put lat/lng in Function Declaration?
This is an important design decision.
If you add lat/lng to the parameters of SEARCH_NEARBY_RESTAURANTS_FN, Gemini will fill in the coordinates itself, but it fills in an approximate location inferred from the conversation, not the user's actual GPS coordinates, and the error can be as large as several kilometers.
The correct approach is to let the Python dispatcher extract the precise coordinates from session.metadata and inject them:
```python
def _execute_function(name: str, args: dict, lat: float, lng: float):
    if name == "search_nearby_restaurants":
        return _call_places_api(
            lat=lat, lng=lng,  # ← injected from the session; never let Gemini guess
            keyword=args.get("keyword", ""),
            min_rating=float(args.get("min_rating", 4.0)),
            radius_m=int(args.get("radius_m", 1000)),
        )
    # Fallback so an unexpected function name fails loudly but gracefully
    return {"error": f"unknown function: {name}"}
```
Core Code Details
Step 1: Define Function Declaration
```python
from google.genai import types

SEARCH_NEARBY_RESTAURANTS_FN = types.FunctionDeclaration(
    name="search_nearby_restaurants",
    description=(
        "Search for nearby restaurants using the Google Places API, returning "
        "rating, address, and user reviews. "
        "lat/lng are supplied automatically by the system and must not be provided."
    ),
    parameters=types.Schema(
        type=types.Type.OBJECT,
        properties={
            "keyword": types.Schema(
                type=types.Type.STRING,
                description="Restaurant type or keyword, e.g. hot pot, Italian",
            ),
            "min_rating": types.Schema(
                type=types.Type.NUMBER,
                description="Minimum rating threshold (1–5), default 4.0",
            ),
            "radius_m": types.Schema(
                type=types.Type.INTEGER,
                description="Search radius in meters, default 1000",
            ),
        },
    ),
)
```
The description explicitly tells the model that lat/lng are supplied by the system, which stops it from filling in coordinates of its own in the args.
Step 2: Places API Call
```python
import os

import httpx

PLACES_API_URL = "https://places.googleapis.com/v1/places:searchNearby"
PLACES_FIELD_MASK = (
    "places.displayName,"
    "places.rating,"
    "places.userRatingCount,"
    "places.formattedAddress,"
    "places.reviews"
)

def _call_places_api(lat, lng, keyword="", min_rating=4.0, radius_m=1000):
    body = {
        "includedTypes": ["restaurant"],
        "maxResultCount": 5,
        "locationRestriction": {
            "circle": {
                "center": {"latitude": lat, "longitude": lng},
                "radiusMeters": radius_m,
            }
        },
    }
    response = httpx.post(
        PLACES_API_URL,
        headers={
            "X-Goog-Api-Key": os.getenv("GOOGLE_MAPS_API_KEY"),
            "X-Goog-FieldMask": PLACES_FIELD_MASK,
        },
        json=body,
        timeout=10.0,
    )
    response.raise_for_status()
    data = response.json()

    restaurants = []
    for place in data.get("places", []):
        rating = place.get("rating", 0)
        if rating < min_rating:
            continue
        reviews = [
            r["text"]["text"]
            for r in place.get("reviews", [])[:3]
            if r.get("text", {}).get("text")
        ]
        restaurants.append({
            "name": place["displayName"]["text"],
            "address": place.get("formattedAddress", ""),
            "rating": rating,
            "rating_count": place.get("userRatingCount", 0),
            "reviews": reviews,
        })
    return {"restaurants": restaurants}
```
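The rating filter and review extraction can be exercised offline by factoring the parsing into its own function and feeding it a hand-built payload. `parse_places` and the sample data below are fabricated for illustration; the logic mirrors the loop above:

```python
# Offline sketch: the same filtering logic as _call_places_api, applied to a
# fabricated searchNearby-style payload so it can be tested without an API key.
def parse_places(data: dict, min_rating: float = 4.0) -> dict:
    restaurants = []
    for place in data.get("places", []):
        rating = place.get("rating", 0)
        if rating < min_rating:  # drop places below the threshold
            continue
        reviews = [
            r["text"]["text"]
            for r in place.get("reviews", [])[:3]
            if r.get("text", {}).get("text")
        ]
        restaurants.append({
            "name": place["displayName"]["text"],
            "rating": rating,
            "reviews": reviews,
        })
    return {"restaurants": restaurants}

sample = {"places": [
    {"displayName": {"text": "A"}, "rating": 4.6,
     "reviews": [{"text": {"text": "great"}}]},
    {"displayName": {"text": "B"}, "rating": 3.2, "reviews": []},
]}
# Only "A" survives the default 4.0 threshold.
```

Keeping the parsing pure like this also makes the httpx call trivially mockable in unit tests.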
Step 3: Tool Combo Main Function (Agentic Loop)
```python
import os

from google import genai
from google.genai import types

async def tool_combo_search(query: str, lat: float, lng: float) -> str:
    client = genai.Client(
        vertexai=True,
        project=os.getenv("GOOGLE_CLOUD_PROJECT"),
        location=os.getenv("GOOGLE_CLOUD_LOCATION", "us-central1"),
        http_options=types.HttpOptions(api_version="v1"),
    )
    enriched_query = (
        f"User's current location: latitude {lat}, longitude {lng}.\n"
        f"Please answer in Traditional Chinese using Taiwanese terminology, "
        f"and do not use markdown formatting.\n\n"
        f"Question: {query}"
    )
    tool_config = types.GenerateContentConfig(
        tools=[
            types.Tool(
                google_maps=types.GoogleMaps(),                        # ← Maps grounding
                function_declarations=[SEARCH_NEARBY_RESTAURANTS_FN],  # ← Places API
            )
        ],
    )

    # ── Step 1: first model call ─────────────────────────────────────
    response = client.models.generate_content(
        model=TOOL_COMBO_MODEL,
        contents=enriched_query,
        config=tool_config,
    )
    if not response.candidates:
        return response.text or "(Unable to get a reply)"

    history = [
        types.Content(role="user", parts=[types.Part(text=enriched_query)]),
        response.candidates[0].content,
    ]

    # ── Step 2: process any function_call ────────────────────────────
    function_response_parts = []
    for part in response.candidates[0].content.parts:
        if part.function_call:
            fn = part.function_call
            result = _execute_function(fn.name, dict(fn.args or {}), lat, lng)
            function_response_parts.append(
                types.Part(
                    function_response=types.FunctionResponse(
                        id=fn.id, name=fn.name, response=result,
                    )
                )
            )

    if function_response_parts:
        history.append(types.Content(role="user", parts=function_response_parts))

        # ── Step 3: second model call with the tool results ──────────
        final = client.models.generate_content(
            model=TOOL_COMBO_MODEL,
            contents=history,
            config=tool_config,
        )
        return final.text or "(Unable to get a reply)"

    # No function_call: the first response is already the final answer
    return response.text or "(Unable to get a reply)"
```
Pitfalls Encountered
❌ Pitfall 1: Part.from_function_response() does not accept the id parameter
This was the easiest pitfall to hit. The error only surfaces when a real model call triggers a function_call, so unit tests almost never catch it.
Originally, I wrote it like this, referring to the official example:
```python
# ❌ Wrong: raises TypeError at runtime
types.Part.from_function_response(
    id=fn.id,  # ← this parameter does not exist!
    name=fn_name,
    response=result,
)
```
The actual signature of from_function_response is:

```python
(*, name: str, response: dict, parts: Optional[list] = None) -> Part
```

There is no id parameter at all. Every time the model actually triggered a function_call, the program threw a TypeError on this line, silently fell into the except around Step 3, and returned an error message; the Places API results never actually made it back to Gemini.
The correct way is to construct types.FunctionResponse directly:

```python
# ✅ Correct
types.Part(
    function_response=types.FunctionResponse(
        id=fn.id,
        name=fn_name,
        response=result,
    )
)
```
You can confirm the parameter list immediately with:

```shell
python -c "from google.genai import types; help(types.Part.from_function_response)"
```
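The same `inspect`-based check works for any callable. It is demonstrated here on a stand-in function mirroring the signature quoted above (the real check would target `types.Part.from_function_response`, which requires the SDK to be installed):

```python
import inspect

# Stand-in with the same shape as the quoted signature, used only to
# demonstrate the inspection technique.
def from_function_response(*, name: str, response: dict, parts=None):
    return {"name": name, "response": response, "parts": parts}

params = set(inspect.signature(from_function_response).parameters)
assert "id" not in params  # the wrong call above would raise TypeError
```

Running this kind of assertion at import time (or in a smoke test) catches signature mismatches long before a real model call triggers them.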
❌ Pitfall 2: include_server_side_tool_invocations=True causes Pydantic to explode
I thought I should add this parameter after seeing the official documentation example:
```python
# ❌ Wrong: the installed SDK version does not support this field
types.GenerateContentConfig(
    tools=[...],
    include_server_side_tool_invocations=True,
)
```
In google-genai 1.49.0, this field is not among GenerateContentConfig's model fields, so Pydantic immediately throws an extra_forbidden validation error. Remove the field and everything works normally.
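One defensive pattern (my own sketch, not from the project) is to gate newer config fields on the installed SDK version. The 1.50.0 threshold below is a placeholder, not a documented cutoff:

```python
from importlib import metadata

def sdk_at_least(package: str, minimum: tuple) -> bool:
    """True if `package` is installed and its version is >= `minimum`."""
    try:
        version = tuple(int(x) for x in metadata.version(package).split(".")[:3])
    except (metadata.PackageNotFoundError, ValueError):
        return False  # not installed, or a version string we can't parse
    return version >= minimum

# Only opt into fields the installed google-genai is known to accept
# (illustrative threshold; check the SDK changelog for the real one).
extra_config = {}
if sdk_at_least("google-genai", (1, 50, 0)):
    extra_config["include_server_side_tool_invocations"] = True
```

This turns a hard-to-trace Pydantic validation error into an explicit, version-aware branch.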
❌ Pitfall 3: textQuery is a parameter of searchText, not searchNearby
I assumed the keyword should be passed straight to the Places API and intuitively added it to the request body:
```python
# ❌ Wrong: textQuery is not a valid field on the searchNearby endpoint
if keyword:
    body["textQuery"] = keyword
```
searchNearby only accepts fields such as includedTypes and locationRestriction; textQuery belongs to the searchText endpoint. Depending on the version, adding the field may not even raise an error; the keyword simply has no effect.
The approach taken here is to keep the keyword in the Function Declaration's description for Gemini to reason about: the model folds the intent into enriched_query, Maps grounding handles the keyword semantics, and the Places API is responsible only for returning real rating data.
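For contrast, the two endpoints take disjoint request shapes. The values below are illustrative; `locationBias` on searchText is part of the Places API (New), but double-check field names against the current docs:

```python
# searchNearby: type + geometry filters only; no free-text query field.
search_nearby_body = {
    "includedTypes": ["restaurant"],
    "maxResultCount": 5,
    "locationRestriction": {
        "circle": {
            "center": {"latitude": 25.0441, "longitude": 121.5598},
            "radiusMeters": 1000,
        }
    },
}

# searchText (endpoint .../places:searchText): the free-text query lives here.
search_text_body = {
    "textQuery": "hot pot",
    "locationBias": {
        "circle": {
            "center": {"latitude": 25.0441, "longitude": 121.5598},
            "radiusMeters": 1000,
        }
    },
}
```

Mixing fields across the two bodies is exactly the silent failure this pitfall describes.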
❌ Pitfall 4: No guard for response.candidates[0]
When the model hits safety filtering, RECITATION, or another abnormal finish reason, candidates can be an empty list, and response.candidates[0] then raises an IndexError.
```python
# ❌ No guard
history = [
    types.Content(role="user", parts=[types.Part(text=enriched_query)]),
    response.candidates[0].content,  # ← IndexError if candidates is empty
]
```

```python
# ✅ With guard
if not response.candidates:
    return response.text or "(Unable to get a reply)"
history = [...]
```
Demo Display
Scenario 1: "Find a hot pot restaurant with a rating of 4 stars or above for group dining"
User sends: GPS location (Xinyi District, Taipei City, 25.0441, 121.5598)
User enters: "Please find a hot pot restaurant with a rating of 4 stars or above, suitable for group dining, and list the name, address, and review summary."
[Step 1: Gemini receives query + map context]
→ Detects that restaurant data is needed and emits a function_call:
search_nearby_restaurants(keyword="hot pot", min_rating=4.0)
[Step 2: Python calls Places API]
→ lat=25.0441, lng=121.5598 injected from session
→ Returns 3 restaurants with a rating ≥ 4.0, including review text
[Step 3: Gemini integrates Maps context + Places data]
→ "Lao Wang Hot Pot|100 Shimin Avenue, Xinyi District|⭐ 4.6 (312)
Review summary: Large portions, great value for money, a top choice for friends to dine; fast service, fresh dishes.
... (3 restaurants in total)"
Scenario 2: "Are there any high-value Japanese restaurants?"
User enters: "Are there any high-value Japanese restaurants nearby?"
[Step 1: Gemini]
→ function_call: search_nearby_restaurants(keyword="Japanese cuisine", min_rating=4.0)
[Step 2: Places API]
→ Returns 2 Japanese restaurants that meet the rating criteria
[Step 3: Gemini]
→ "There are two recommendations:
Washoku ○○|...|⭐ 4.4|Reviews: Weekday lunch set is only NT$280, very fresh.
..."
Demo Script Quick Test
No LINE Bot required; run it directly on your local machine:

```shell
# Test only the Tool Combo (the main feature)
python demo.py combo

# Run all three features
python demo.py all
```
Old Architecture vs. New Architecture
| | Old Architecture (Maps Grounding only) | New Architecture (Tool Combo) |
|---|---|---|
| Tool | google_maps (built-in) | google_maps + search_nearby_restaurants (custom) |
| Rating Data | Described by the model itself (may be inaccurate) | Real numbers from the Places API |
| Reviews | AI-generated | Real user reviews (up to 3) |
| API Calls | 1 | 2 (Step 1 + Step 3), but transparent to the user |
| Accuracy | Medium | High |
| Custom Filtering | Relies on the prompt | Precise control via min_rating and radius_m |
Analysis and Outlook
This implementation has given me a clearer understanding of the potential of Gemini Tool Combinations.
The problem that Tool Combinations truly solves is that Grounding and Function Calling are no longer mutually exclusive. Previously, to achieve "map context + real external data", you could only manually connect two APIs yourself at the application layer, or use Gemini's text generation to "simulate" external data (unreliable). Now the model itself knows when to use map context and when to call the Places API, and developers only need to attach the tools.
However, there are also a few things to note about this implementation:
- lat/lng injection matters: never let the model guess coordinates itself; inject them from the session, or positioning accuracy will be very poor. The same pattern applies to any function-calling scenario that carries session state.
- Two generate_content calls cost more: the Tool Combo agentic loop needs two model calls, so token consumption is roughly 1.5–2x that of a single call. Weigh this carefully in latency-sensitive scenarios.
- SDK versions differ: different google-genai versions support different GenerateContentConfig fields. Confirm the installed version before using new fields like include_server_side_tool_invocations, or the resulting Pydantic validation errors are hard to track down.
Directions worth extending in the future:
- Connect the Postback quick replies (tapping the "Find Restaurant" button) to Tool Combo, so every entry point gets real ratings
- Add the searchText endpoint to support more complex keyword searches (e.g. Michelin recommendations)
- Combine Tool Combo with other built-in tools (such as google_search) for more complex multi-tool chaining
Summary
The core concept of this modification is only one sentence: Put Google Maps grounding and the Places API function tool in the same types.Tool, and Gemini will coordinate the two in a single conversation.
The key code is only these few lines:
```python
# This is all the magic of Tool Combo
types.Tool(
    google_maps=types.GoogleMaps(),                        # ← Maps context
    function_declarations=[SEARCH_NEARBY_RESTAURANTS_FN],  # ← Places API
)
```
But to make it actually work, you also need to get a few details right: how FunctionResponse is constructed, the guard on candidates, the correct fields for each Places API endpoint, and injecting lat/lng from the session instead of letting the model guess.
The complete code is on GitHub, feel free to clone and play with it.
See you next time!