Akshit Madan

🌤️ Building a Gemini-Powered Conversational Weather Agent with Maxim Logging

“How’s the weather today in Delhi?”
Simple question — but what if we wanted a conversational AI that could answer it, explain the temperature trend, and log every detail of its interaction for analysis?

Agentic systems are booming. But building a reliable production-ready AI agent involves more than just connecting an LLM. You need tool usage, observability, error handling, and real-time responses. In this guide, we’ll build a Gemini AI-powered weather assistant that does all this, with observability powered by Maxim and real-time data fetched via Tavily.

The Problem

Conversational agents are great, until something breaks and you don’t know why. Whether it’s tool errors, vague responses, or inflated costs, most agent pipelines lack end-to-end monitoring.

That’s where Maxim comes in: an observability SDK for LLM-based apps. And with Google Gemini’s new APIs, you can now create agents that reason well and support tool usage. Let’s bring the two together to solve a common use case: a weather bot.

What We’ll Build

We’ll implement a Gemini-powered conversational agent that:

  • Responds to user messages in a natural chat format

  • Uses Tavily to fetch current weather for any location

  • Logs all traces, latency, costs, and tool usage into the Maxim Dashboard

  • Offers both manual and automatic trace management

We’ve also published a video explanation of this tutorial (see the original post for the embedded video).

Prerequisites

Make sure you’ve got:

  • Python 3.8 or above

  • API keys for Google Gemini, Maxim, and Tavily

  • Basic comfort with Python and REST APIs

Step 1: Install Dependencies

Install all required libraries:

pip install google-generativeai 
pip install maxim-py 
pip install tavily-python 
pip install python-dotenv

Step 2: Configure Environment

Create and load environment variables for API keys:

import os
from dotenv import load_dotenv

# Load keys from a .env file if one is present...
load_dotenv()

# ...or set them directly (replace the placeholders with your actual keys)
os.environ["GEMINI_API_KEY"] = "your_gemini_api_key"
os.environ["MAXIM_API_KEY"] = "your_maxim_api_key"
os.environ["TAVILY_API_KEY"] = "your_tavily_api_key"
os.environ["MAXIM_REPO_ID"] = "your_maxim_repo_id"

These keys are loaded and set as environment variables so they can be accessed throughout your app without hardcoding them again.

Add validation to avoid misconfigurations:

required_vars = ["MAXIM_API_KEY", "MAXIM_REPO_ID", "GEMINI_API_KEY", "TAVILY_API_KEY"]
for var in required_vars:
    if not os.getenv(var):
        print(f"❌ {var} is missing")
    else:
        print(f"✅ {var} loaded")

This ensures that none of the required API keys are missing. It prevents runtime errors due to misconfiguration.
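
If you’d rather stop execution than print a warning, here’s a fail-fast variant (a small sketch, using the same required_vars list as above):

missing = [var for var in required_vars if not os.getenv(var)]
if missing:
    # Abort early so the agent never starts with a broken configuration
    raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")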

Step 3: Set Up Maxim Logging

from maxim import Maxim, LoggerConfig

# Maxim reads MAXIM_API_KEY from the environment; point the logger at your log repo
logger = Maxim().logger(LoggerConfig(id=os.getenv("MAXIM_REPO_ID")))

This initializes the Maxim logger for your specific repository ID. From here on, all traces will be pushed to this log repo for tracking agent activity, latency, tool usage, and more.

Step 4: Configure Gemini Client (with Maxim)

from google import genai
from maxim.logger.gemini import MaximGeminiClient

gemini_client = MaximGeminiClient(
    client=genai.Client(api_key=os.getenv("GEMINI_API_KEY")),
    logger=logger
)

This wraps the Gemini client with Maxim’s logging layer. Now all requests/responses and tool calls are logged automatically.
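
As a quick sanity check (a one-off illustrative call, using the same generate_content signature we’ll use in Step 7), you can confirm the wrapper is a drop-in replacement:

# One-off call through the wrapped client; it should show up
# as a trace in your Maxim log repository
reply = gemini_client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Say hello in one sentence.",
)
print(reply.text)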

Step 5: Set Up Tavily for Weather Retrieval

from tavily import TavilyClient 

tavily_client = TavilyClient(api_key=os.getenv("TAVILY_API_KEY"))

Tavily will handle real-time weather searches. It searches the internet and returns snippets from trusted sources.
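
Before wrapping it in a function, it’s worth inspecting the raw response shape (an exploratory call; the results and content fields are the same ones Step 6 filters on):

results = tavily_client.search(
    query="current weather in Delhi",
    search_depth="basic",
    max_results=3,
)
for r in results.get("results", []):
    # Each result carries a source URL and a content snippet (truncated for readability)
    print(r.get("url", ""), "->", r.get("content", "")[:120])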

🌦️ Step 6: Define Weather Fetch Function

def get_weather_with_tavily(location: str) -> str:
    try:
        query = f"current weather in {location} today temperature conditions"
        results = tavily_client.search(query=query, search_depth="basic", max_results=3)

        # Keep only snippets that actually mention weather details
        weather_info = [
            r.get('content', '')[:200] + "..."
            for r in results.get('results', [])
            if any(word in r.get('content', '').lower() for word in ['temperature', 'weather', 'degrees', '°'])
        ]

        if not weather_info:
            return "No weather info found."
        return f"Weather in {location}:\n" + "\n".join(weather_info[:2])

    except Exception as e:
        return f"Error: Could not retrieve weather for {location} — {str(e)}"

This function:

  • Creates a search query using the location

  • Queries Tavily for relevant web content

  • Filters for useful weather details (temp, degrees, etc.)

  • Returns a clean response or a fallback message
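
A quick standalone check before wiring it into the agent:

# Expected shape: "Weather in Delhi:" followed by up to two filtered snippets
print(get_weather_with_tavily("Delhi"))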

Step 7: Create the Conversational Agent

This initializes the agent with:

  • Gemini client for conversation

  • Maxim logger for observability

  • Weather function for tool use

  • A conversation history list that keeps a local transcript (this minimal version doesn’t re-send past turns to the model)

from uuid import uuid4
from maxim.logger import TraceConfig

class ConversationalAgent:
    def __init__(self, gemini_client, logger, weather_function):
        self.gemini_client = gemini_client
        self.logger = logger
        self.weather_function = weather_function
        self.history = []

    def chat(self, message: str, use_external_trace: bool = False):
        print(f"User: {message}")
        self.history.append(f"User: {message}")

        try:
            config = {
                "tools": [self.weather_function],
                "system_instruction": "You're a helpful weather assistant. Use the weather tool when needed.",
                "temperature": 0.7,
            }

            if use_external_trace:
                # Manual trace mode: create the trace yourself and attach the call to it
                trace = self.logger.trace(TraceConfig(id=str(uuid4())))
                response = self.gemini_client.models.generate_content(
                    model="gemini-2.0-flash",
                    contents=message,
                    config=config,
                    trace_id=trace.id
                )
                trace.end()
            else:
                # Automatic trace mode: Maxim manages the trace for you
                response = self.gemini_client.models.generate_content(
                    model="gemini-2.0-flash",
                    contents=message,
                    config=config,
                )

            self.history.append(f"AI: {response.text}")
            print(f"AI: {response.text}")
            return response.text

        except Exception as e:
            error = f"❌ Error: {str(e)}"
            print(error)
            return error

    def get_history(self):
        return "\\n".join(self.history)

    def clear_history(self):
        self.history = []
        print("History cleared.")

Step 8: Run a Test

This initializes the weather agent and tests it with a single prompt. You’ll see a response from Gemini in the console and the corresponding trace in Maxim.

agent = ConversationalAgent(
    gemini_client=gemini_client,
    logger=logger,
    weather_function=get_weather_with_tavily
)

agent.chat("What's the weather in London today?")

Step 9: Launch Interactive Chat Mode

This creates a CLI-based chat experience with basic commands for exiting, clearing, and reviewing the conversation.

def interactive_chat():
    print("Chat started. Type 'exit' to stop.")
    while True:
        msg = input("\nUser: ").strip()
        if msg.lower() in ["exit", "quit", "bye"]:
            print("👋 Chat ended.")
            break
        elif msg.lower() == "history":
            print(agent.get_history())
        elif msg.lower() == "clear":
            agent.clear_history()
        else:
            agent.chat(msg)

interactive_chat()

Step 10: Verify in Maxim Dashboard

Once running:

  1. Head to your Maxim dashboard

  2. View the repository logs

  3. Drill into trace IDs to view:

  • Tokens used

  • Response latency

  • Tool call usage

  • Full chat history


Advanced Options

Trace Modes

  • Automatic (default): Easier setup, no trace ID management

  • Manual: Add a TraceConfig for full control (e.g., for long chains or external tracing), as sketched below
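
Here’s a minimal sketch of manual mode on its own, reusing the exact TraceConfig pattern from Step 7 (the example prompt is illustrative):

from uuid import uuid4
from maxim.logger import TraceConfig

# Create and end the trace yourself; useful when a single trace should span
# several model calls or an externally managed workflow
trace = logger.trace(TraceConfig(id=str(uuid4())))
response = gemini_client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Will it rain in Mumbai tomorrow?",
    config={"tools": [get_weather_with_tavily]},
    trace_id=trace.id,
)
trace.end()
print(response.text)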

Model Config

Pass this dict as the config argument to generate_content to tune generation behavior:

{
    "temperature": 0.7,
    "max_output_tokens": 1000,
    "tools": [weather_function],
    "system_instruction": "You are a weather assistant..."
}

Final Thoughts

This tutorial shows how to build a real-world conversational agent, not just with AI, but with the operational maturity needed for production use.

✅ Gemini AI
✅ Tavily tool integration
✅ Maxim observability

By logging every interaction, response, tool call, and cost, you gain deep insight into how your agents behave and scale.

Try It Yourself

Looking to plug in your own tools or run multi-step workflows?

This architecture is modular: just swap the weather_function for another tool, like search, stock lookup, or even a retrieval-augmented QA pipeline, as in the sketch below.
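
For example, a hypothetical stock-quote tool (get_stock_quote and its Tavily query are illustrative, not part of the original tutorial) with the same string-in, string-out shape drops straight into the agent:

# Hypothetical replacement tool: same signature pattern as get_weather_with_tavily
def get_stock_quote(symbol: str) -> str:
    try:
        results = tavily_client.search(
            query=f"current stock price of {symbol}",
            search_depth="basic",
            max_results=3,
        )
        snippets = [r.get("content", "")[:200] for r in results.get("results", [])]
        if not snippets:
            return f"No quote found for {symbol}."
        return f"Quotes for {symbol}:\n" + "\n".join(snippets[:2])
    except Exception as e:
        return f"Error: {e}"

# Everything else, including Maxim logging, stays unchanged
stock_agent = ConversationalAgent(
    gemini_client=gemini_client,
    logger=logger,
    weather_function=get_stock_quote,
)
stock_agent.chat("What's AAPL trading at today?")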

Happy building! ☀️🌧️⚡

Originally published at https://www.getmaxim.ai on June 13, 2025.
