This tutorial will walk you through creating a minimal, working chatbot from scratch in just a few steps. By the end, you'll have a functional web application that users can interact with to learn about climate topics.
What We're Building
We'll create a chat interface where users can type questions about climate and sustainability topics and receive intelligent responses powered by Google's Gemini AI. This first version focuses on getting the core functionality working. We'll keep it simple with no memory between conversations and no complex features, just a clean foundation we can build upon.
Prerequisites
Before we start, make sure you have:
- Python 3.9 or higher installed (the google-generativeai library requires 3.9+)
- A Google AI API key for Gemini (get one from Google AI Studio)
- Basic familiarity with terminal/command line
- A code editor like VS Code (recommended for beginners)
Good News About Costs: Gemini offers a generous free tier that includes up to 15 requests per minute and 1,500 requests per day, which is perfect for learning and building personal projects like this chatbot!
Step 1: Setting Up Your Development Environment
First, let's create a dedicated project folder and set up a clean Python environment:
# Create and navigate to project folder
mkdir climate-chatbot
cd climate-chatbot
# Create virtual environment
python -m venv .venv
# Activate virtual environment
# On Mac/Linux:
source .venv/bin/activate
# On Windows (PowerShell):
# .venv\Scripts\Activate.ps1
# On Windows (Command Prompt):
# .venv\Scripts\activate
Now install the required dependencies:
# Upgrade pip and install libraries
pip install --upgrade pip
pip install streamlit python-dotenv google-generativeai
Setting Up Your Project Files
Open your project folder in VS Code (or your preferred code editor) and create these files:
- `app.py` (our main application)
- `requirements.txt` (for easy dependency management)
- `.env` (for storing your API key securely)
- `.gitignore` (to prevent sensitive files from being uploaded to version control)
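If you prefer the terminal, you can create all four files at once. This works on Mac/Linux; on Windows, create the files in your editor instead:

```shell
# Create the four empty project files in one go
touch app.py requirements.txt .env .gitignore
```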
Your folder structure should look like this:
climate-chatbot/
├── .venv/ (virtual environment folder)
├── app.py (main application file)
├── requirements.txt (dependencies list)
├── .env (API key storage)
└── .gitignore (files to ignore in git)
Getting Your Gemini API Key
Gemini offers excellent free models that are perfect for our chatbot:
- Navigate to Google AI Studio: Go to https://aistudio.google.com/apikey
- Sign up or log in with your Google account
- Create API Key: Click the "Create API Key" button
- Copy your key: Save this key securely - you'll need it in the next step
Free Tier Benefits:
- 15 requests per minute
- 1,500 requests per day
- Access to powerful models like `gemini-1.5-flash`
- No credit card required to start
This free allowance is more than enough for learning, development, and even small production applications!
Step 2: Configuring Your API Key
In your `.env` file, add your API key:
GEMINI_API_KEY=your_actual_api_key_here
Security Note: Never commit your `.env` file to version control.
Create a .gitignore File
In your `.gitignore` file, add these lines to keep sensitive files safe:
.env
.venv/
__pycache__/
*.pyc
This ensures your API keys and virtual environment won't accidentally be shared if you upload your project to GitHub.
Step 3: Building the Chat Application
Now for the main event: creating our chatbot application. Open your `app.py` file and add the following code:
# app.py
import os
from dotenv import load_dotenv

load_dotenv()

import streamlit as st
import google.generativeai as genai

# Configure the Gemini API
API_KEY = os.getenv("GEMINI_API_KEY")
genai.configure(api_key=API_KEY)

# Configure Streamlit page
st.set_page_config(page_title="Climate Helper Chatbot", layout="centered")
st.title("🌱 Climate Helper Chatbot")
st.subheader("Your AI assistant for climate, solar, and sustainability questions")

# Initialize chat history
if "messages" not in st.session_state:
    st.session_state.messages = [
        {
            "role": "assistant",
            "content": "Hi! I'm your climate helper. Ask me anything about solar energy, sustainability, or climate science. How can I help you today?"
        }
    ]

def display_messages():
    """Display all messages in the chat history"""
    for msg in st.session_state.messages:
        author = "user" if msg["role"] == "user" else "assistant"
        with st.chat_message(author):
            st.write(msg["content"])

def friendly_wrap(raw_text):
    """Add a friendly tone to AI responses"""
    return (
        "Great question! 🌱\n\n"
        f"{raw_text.strip()}\n\n"
        "Would you like me to elaborate on any part of this, or do you have other climate questions?"
    )

# Display existing messages
display_messages()

# Handle new user input
prompt = st.chat_input("Ask me about climate, solar installations, sustainability...")

if prompt:
    # Add user message to history
    st.session_state.messages.append({"role": "user", "content": prompt})

    # Show user message
    with st.chat_message("user"):
        st.write(prompt)

    # Show thinking indicator while processing
    with st.chat_message("assistant"):
        placeholder = st.empty()
        placeholder.write("🤔 Thinking...")

        # Call Gemini API
        try:
            model = genai.GenerativeModel("gemini-1.5-flash")
            response = model.generate_content(
                f"You are a helpful climate and sustainability expert. Please provide accurate, encouraging information about: {prompt}"
            )
            # Extract response text
            answer = response.text
            friendly_answer = friendly_wrap(answer)
        except Exception as e:
            friendly_answer = f"I'm sorry, I encountered an error: {e}. Please try asking your question again."

        # Replace thinking indicator with actual response
        placeholder.write(friendly_answer)

    # Add assistant response to history
    st.session_state.messages.append({"role": "assistant", "content": friendly_answer})

    # Refresh the page to show updated chat
    st.rerun()
Step 4: Running Your Chatbot
Important: Save all your files before running the application!
With your code in place, it's time to see your chatbot in action:
streamlit run app.py
Streamlit will start a local server and provide a URL (typically `http://localhost:8501`). Open this URL in your browser, and you'll see your climate chatbot ready to answer questions!
Optional: Create a Requirements File
For easy project sharing and deployment, create a `requirements.txt` file with:
streamlit==1.28.0
python-dotenv==1.0.0
google-generativeai==0.3.0
This allows others to install all dependencies with `pip install -r requirements.txt`.
Understanding the Code
Let's break down what our application does:
Environment Setup: We use `python-dotenv` to securely load our API key from the `.env` file.
Streamlit Interface: The `st.chat_input()` and `st.chat_message()` functions create a modern chat interface that feels familiar to users.
Session State: Streamlit's session state keeps track of the conversation history, so users can see their previous questions and the bot's responses.
Gemini Integration: We configure the Google Generative AI library with our API key, then create a `GenerativeModel` instance using the free `gemini-1.5-flash` model to generate responses to user questions.
Friendly Responses: The `friendly_wrap()` function adds encouraging language to make the bot feel more helpful and engaging.
Troubleshooting Common Issues
Authentication Errors: Double-check that your `GEMINI_API_KEY` is correctly set in your `.env` file and that the key is valid.
Import Errors: If you encounter issues with the Google client import, try updating the package: `pip install --upgrade google-generativeai`
Model Availability Issues: If you get a "model not found" error, try these free alternatives:
- `gemini-1.5-flash` (fastest, recommended for development)
- `gemini-1.5-pro` (more capable but slower, still free within limits)
You can also list available models by adding this code temporarily:
for model in genai.list_models():
    if 'generateContent' in model.supported_generation_methods:
        print(model.name)
Free Tier Limits: If you hit rate limits, the free tier allows:
- 15 requests per minute
- 1,500 requests per day
- Simply wait a moment and try again if you exceed these limits
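If you do bump into the per-minute limit, a small retry helper with exponential backoff can smooth things over. This is only a sketch: `generate_with_retry` and its parameters are illustrative, not part of the google-generativeai library, and real rate-limit errors should be matched more precisely than a blanket `except`:

```python
import time

def generate_with_retry(generate_fn, prompt, retries=3, base_delay=2.0):
    """Call generate_fn(prompt), backing off exponentially on errors.
    Waits base_delay, then 2x, then 4x... before re-raising on the last try."""
    for attempt in range(retries):
        try:
            return generate_fn(prompt)
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries, surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))
```

In `app.py` you could then wrap the API call as `generate_with_retry(model.generate_content, prompt)` instead of calling `model.generate_content(prompt)` directly.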
Testing Your Chatbot
Try asking your chatbot questions like:
- "What are the benefits of solar panels for homes?"
- "How can I reduce my carbon footprint?"
- "What's the difference between renewable and clean energy?"
You should see informative, friendly responses that encourage further learning.
What's Next?
Congratulations! You've built a working climate chatbot. This foundation opens up many possibilities for enhancement:
- Memory: Add conversation memory so the bot remembers context from earlier in the chat
- RAG (Retrieval Augmented Generation): Connect to climate databases for more specific, up-to-date information
- UI Improvements: Add custom styling, charts, or interactive elements
- Specialized Knowledge: Fine-tune responses for specific topics like solar installation or carbon accounting.
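As a taste of the memory enhancement, the existing message history can be replayed to Gemini through its chat API. This is only a sketch: `to_gemini_history` is a hypothetical helper, and the commented lines show where it could slot into `app.py`'s request handler:

```python
def to_gemini_history(messages):
    """Convert the app's Streamlit-style messages into Gemini's chat format.
    Gemini uses the role "model" where we use "assistant",
    and wraps message text in a "parts" list."""
    return [
        {"role": "model" if m["role"] == "assistant" else "user",
         "parts": [m["content"]]}
        for m in messages
    ]

# In app.py, the generate_content call could become a chat session:
# model = genai.GenerativeModel("gemini-1.5-flash")
# chat = model.start_chat(history=to_gemini_history(st.session_state.messages[:-1]))
# response = chat.send_message(prompt)
```

Passing `messages[:-1]` as history keeps the latest user prompt out of it, since that prompt is what `send_message` delivers.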
You can find the full source code, including additional features and improvements, in the GitHub repository. Feel free to fork it, or use it as a starting point for your own projects!
Connect with me on LinkedIn - I'm always excited to chat about AI, Software Development, sustainability, and how technology can help solve our climate challenges.