LLM Zoomcamp Tutorial Series - Professional Function Calling & Conversational Agents
Welcome to Part 2 of our LLM Zoomcamp tutorial series! Now that you understand the fundamentals from Part 1, we're ready to build professional-grade agentic systems using OpenAI Function Calling. This is where your assistant becomes truly intelligent!
OpenAI Function Calling: The Professional Standard (LLM Zoomcamp Pro Level)
In Part 1, we used JSON parsing to handle agent decisions. While that works, OpenAI Function Calling is the professional standard used in production systems! Think of it as upgrading from handwritten forms to a professional database system.
Why Function Calling is Superior (LLM Zoomcamp Best Practices)
- Type Safety: Automatic validation of inputs and outputs
- Documentation: Self-describing tools with clear parameters
- Performance: Optimized for structured interactions
- Scalability: Easy to add new tools and capabilities
- Reliability: Consistent format reduces parsing errors
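Before we build the full system, here's a minimal sketch of the difference. The dicts below mirror the shape of an OpenAI function-calling response, but the id and values are made up for illustration - no API call happens here:

```python
import json

# Part 1 style: the "tool request" is buried in free text you have to scrape.
part1_style_reply = 'Sure! {"action": "search", "query": "docker setup"} Hope that helps!'

# Function-calling style: the model returns a structured tool_calls entry.
# (Shape mirrors the OpenAI response format; the id is a made-up example.)
function_calling_reply = {
    "tool_calls": [
        {
            "id": "call_123",
            "function": {
                "name": "search_faq",
                # arguments always arrive as a JSON *string* you parse yourself
                "arguments": '{"query": "docker setup"}',
            },
        }
    ]
}

# Parsing becomes one well-defined step instead of regex guesswork:
call = function_calling_reply["tool_calls"][0]
args = json.loads(call["function"]["arguments"])
print(call["function"]["name"], args)  # search_faq {'query': 'docker setup'}
```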
Let's build it step by step!
Setting Up Advanced Function Tools (LLM Zoomcamp Implementation)
# π Enhanced imports for professional function calling
import json
import requests
from openai import OpenAI
from minsearch import AppendableIndex
from typing import Dict, List, Any, Optional
# π Initialize our OpenAI client (your gateway to ChatGPT)
client = OpenAI()
print("π§ Setting up LLM Zoomcamp Function Calling system!")
print("β¨ This is the professional way to build AI agents!")
LLM Zoomcamp Explanation: We're importing `typing` to make our code more professional. It's like adding labels to all your tools so everyone knows what they do!
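As a tiny illustration (the `top_question` helper is made up for this example, not part of our system):

```python
from typing import Any, Dict, List

# Type hints don't change runtime behavior, but they document the contract:
# readers (and tools like mypy) can see what goes in and what comes out.
def top_question(results: List[Dict[str, Any]]) -> str:
    """Return the question text of the highest-ranked search result."""
    if not results:
        return "No results"
    return results[0].get("question", "No question")

print(top_question([{"question": "How do I install Docker?"}]))  # How do I install Docker?
```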
Enhanced Data Setup with Error Handling (LLM Zoomcamp Robustness)
def setup_llm_zoomcamp_data():
"""
π Professional data setup with error handling.
This function downloads and prepares our LLM Zoomcamp FAQ data
with proper error handling - just like real production systems!
Returns:
tuple: (documents list, search index) or (None, None) if failed
"""
try:
print("π Downloading LLM Zoomcamp FAQ data...")
# π₯ Download with timeout and error handling
docs_url = 'https://github.com/alexeygrigorev/llm-rag-workshop/raw/main/notebooks/documents.json'
response = requests.get(docs_url, timeout=30) # 30 second timeout
response.raise_for_status() # Raise exception for bad status codes
documents_raw = response.json()
print(f"✅ Successfully downloaded {len(documents_raw)} courses!")
# π Process the data (same as Part 1, but with better logging)
documents = []
for course in documents_raw:
course_name = course['course']
for doc in course['documents']:
doc['course'] = course_name
documents.append(doc)
print(f"π Processed {len(documents)} FAQ documents total")
# ποΈ Create search index with error handling
index = AppendableIndex(
text_fields=["question", "text", "section"],
keyword_fields=["course"]
)
index.fit(documents)
print("π― Search index created successfully!")
return documents, index
except requests.RequestException as e:
print(f"β Network error downloading data: {e}")
return None, None
except json.JSONDecodeError as e:
print(f"β Error parsing JSON data: {e}")
return None, None
except Exception as e:
print(f"β Unexpected error: {e}")
return None, None
# π Initialize our LLM Zoomcamp system
documents, index = setup_llm_zoomcamp_data()
if documents and index:
print("π LLM Zoomcamp system ready!")
else:
print("π± System initialization failed!")
LLM Zoomcamp Explanation: Professional systems always handle errors gracefully! Instead of crashing when something goes wrong, we catch problems and explain what happened. It's like having airbags in your car!
Building Professional Function Tools (LLM Zoomcamp Function Library)
Now let's create our function tools using OpenAI's professional format! These are like giving ChatGPT superpowers!
# π Function 1: Professional FAQ Search
def search_faq(query: str, course: str = "data-engineering-zoomcamp") -> List[Dict[str, Any]]:
"""
π Search the LLM Zoomcamp FAQ database for relevant information.
This is like having a super-smart librarian who can instantly find
the most relevant course information for any question!
Args:
query (str): What to search for (e.g., "Docker installation")
course (str): Which course to search in (default: data-engineering-zoomcamp)
Returns:
List[Dict]: List of relevant FAQ entries with scores
"""
if not index:
print("β Search index not available!")
return []
print(f"π Searching for: '{query}' in course: {course}")
# π― Professional search with boosting
boost = {'question': 3.0, 'section': 0.5}
try:
results = index.search(
query=query,
filter_dict={'course': course},
boost_dict=boost,
num_results=5,
output_ids=True
)
print(f"✅ Found {len(results)} relevant results")
return results
except Exception as e:
print(f"β Search error: {e}")
return []
# π Function 2: Professional Context Builder
def build_context_from_results(results: List[Dict[str, Any]]) -> str:
"""
π Build a professional context string from search results.
Think of this as organizing your research notes in the perfect format
for ChatGPT to understand and use effectively!
Args:
results (List[Dict]): Search results from search_faq function
Returns:
str: Beautifully formatted context string
"""
if not results:
return "No relevant information found in the FAQ database."
context_parts = []
for i, doc in enumerate(results, 1):
context_part = f"""
Document {i}:
Section: {doc.get('section', 'Unknown')}
Question: {doc.get('question', 'No question')}
Answer: {doc.get('text', 'No answer available')}
Relevance Score: {doc.get('score', 'N/A')}
""".strip()
context_parts.append(context_part)
final_context = "\n\n" + "\n\n".join(context_parts)
print(f"π Built context from {len(results)} documents")
return final_context
# π§ͺ Let's test our professional functions!
print("\nπ§ͺ Testing LLM Zoomcamp Professional Functions:")
print("="*60)
# Test search function
test_results = search_faq("How do I install Docker?")
if test_results:
print(f"π― First result: {test_results[0].get('question', 'No question')}")
# Test context builder
test_context = build_context_from_results(test_results)
print(f"π Context length: {len(test_context)} characters")
print(f"π Context preview: {test_context[:200]}...")
else:
print("π No results found for test query")
LLM Zoomcamp Explanation: Our professional functions are like upgraded tools in a workshop. They have:
- Clear documentation (docstrings explaining what they do)
- Error handling (they don't break when things go wrong)
- Type hints (Python knows what kind of data to expect)
- Validation (they check if everything is working properly)
OpenAI Function Definitions (LLM Zoomcamp Professional Format)
Now comes the magic! We need to tell OpenAI about our functions in a special format. This is like writing a manual for ChatGPT!
# π Professional function definitions for OpenAI
llm_zoomcamp_tools = [
{
"type": "function",
"function": {
"name": "search_faq",
"description": """
π Search the LLM Zoomcamp FAQ database for course-related information.
Use this tool when students ask questions about:
- Course enrollment and requirements
- Technical setup (Docker, Python, environment)
- Assignment deadlines and procedures
- Course content and materials
- Troubleshooting technical issues
This tool searches through real FAQ data from data engineering courses.
""".strip(),
"parameters": {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "The search query. Be specific! Good: 'Docker installation on Windows', Bad: 'Docker'"
},
"course": {
"type": "string",
"description": "Which course to search in",
"enum": ["data-engineering-zoomcamp"],
"default": "data-engineering-zoomcamp"
}
},
"required": ["query"]
}
}
},
{
"type": "function",
"function": {
"name": "add_faq_entry",
"description": """
π Add a new entry to the LLM Zoomcamp FAQ database.
Use this tool when:
- A student asks a good question that isn't in the FAQ
- You want to save information for future students
- The conversation reveals a gap in our FAQ coverage
This helps improve the course materials for everyone! π
""".strip(),
"parameters": {
"type": "object",
"properties": {
"question": {
"type": "string",
"description": "The question to add (should be clear and specific)"
},
"answer": {
"type": "string",
"description": "The detailed answer to the question"
},
"section": {
"type": "string",
"description": "Which course section this belongs to (e.g., 'setup', 'docker', 'assignments')"
}
},
"required": ["question", "answer", "section"]
}
}
}
]
print("π Created professional LLM Zoomcamp function definitions!")
print("β¨ ChatGPT now knows about our awesome tools!")
# π Let's see what we've created
for tool in llm_zoomcamp_tools:
func_name = tool['function']['name']
print(f"π§ Tool: {func_name}")
print(f"π Purpose: {tool['function']['description'][:100]}...")
LLM Zoomcamp Explanation: These function definitions are like instruction manuals for ChatGPT. They tell the AI:
- What each tool does (description)
- What information it needs (parameters)
- What's required vs optional (required fields)
- How to use them effectively (examples and guidelines)
It's like giving ChatGPT a detailed handbook for using your tools!
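One practical payoff: because the schema lives in plain Python data, our own code can double-check arguments before dispatching them. Here's a sketch - the `check_required_args` helper is ours, not something the OpenAI SDK provides, and the trimmed-down tool definition below is a hypothetical example in the same format:

```python
from typing import Any, Dict, List

def check_required_args(tool_def: Dict[str, Any], args: Dict[str, Any]) -> List[str]:
    """Return the names of required parameters that are missing from args."""
    params = tool_def["function"]["parameters"]
    return [name for name in params.get("required", []) if name not in args]

# Hypothetical, trimmed-down tool definition matching the format above
search_tool = {
    "function": {
        "name": "search_faq",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}, "course": {"type": "string"}},
            "required": ["query"],
        },
    }
}

print(check_required_args(search_tool, {"course": "data-engineering-zoomcamp"}))  # ['query']
print(check_required_args(search_tool, {"query": "Docker installation"}))         # []
```

The model almost always respects the `required` list, but a check like this turns a rare malformed call into a clear error instead of a confusing `TypeError` deep inside your tool.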
Implementing the Missing Function (LLM Zoomcamp Completeness)
We defined an add_faq_entry function but haven't implemented it yet. Let's fix that!
def add_faq_entry(question: str, answer: str, section: str) -> Dict[str, Any]:
"""
π Add a new entry to our LLM Zoomcamp FAQ database.
This function simulates adding to a database. In a real system,
this would save to a database or file!
Args:
question (str): The question to add
answer (str): The detailed answer
section (str): Which section this belongs to
Returns:
Dict: Success/failure status with details
"""
print(f"π Adding new FAQ entry to section: {section}")
print(f"β Question: {question}")
print(f"✅ Answer: {answer[:100]}{'...' if len(answer) > 100 else ''}")
# π Create the new entry
new_entry = {
"question": question,
"text": answer,
"section": section,
"course": "data-engineering-zoomcamp",
"id": f"new_entry_{len(documents) + 1}" # Simple ID generation
}
try:
# π In a real system, you'd save to a database here
# For our demo, we'll just add to our in-memory list
documents.append(new_entry)
# π Update our search index with the new entry
if index:
# Note: AppendableIndex would need an .add() method for this to work
# For demo purposes, we'll just acknowledge the addition
print("ποΈ Entry would be added to search index")
print("✅ Successfully added new FAQ entry!")
return {
"status": "success",
"message": f"Added FAQ entry to section '{section}'",
"entry_id": new_entry["id"]
}
except Exception as e:
print(f"β Error adding FAQ entry: {e}")
return {
"status": "error",
"message": f"Failed to add entry: {str(e)}"
}
print("π FAQ entry function implemented!")
print("π Our LLM Zoomcamp system can now learn and grow!")
LLM Zoomcamp Explanation: This function lets our system learn! When students ask new questions, we can save both the question and answer for future students. It's like building a knowledge base that gets smarter over time!
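In this tutorial new entries only live in memory and vanish when the notebook restarts. As a sketch of the next step (the filename and helper below are made up for illustration), a small JSON file already gives you persistence:

```python
import json
from pathlib import Path

# Hypothetical persistence helper: a JSON file is the simplest upgrade
# from the in-memory list before moving to a real database.
FAQ_FILE = Path("new_faq_entries.json")  # made-up filename for this sketch

def persist_faq_entry(entry: dict, path: Path = FAQ_FILE) -> int:
    """Append one entry to a JSON file and return how many are stored."""
    entries = json.loads(path.read_text()) if path.exists() else []
    entries.append(entry)
    path.write_text(json.dumps(entries, indent=2))
    return len(entries)

count = persist_faq_entry({
    "question": "How do I reset my environment?",
    "text": "Delete the containers and re-run docker compose up.",
    "section": "setup",
    "course": "data-engineering-zoomcamp",
})
print(f"Stored {count} entries so far")
```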
Professional Function Calling Implementation (LLM Zoomcamp Magic)
Now for the main event! Let's build our professional function calling system!
def execute_function_call(function_name: str, function_args: Dict[str, Any]) -> Any:
"""
π§ Execute a function call safely with proper error handling.
This is like having a safe way to use any tool in your toolbox!
Args:
function_name (str): Name of the function to call
function_args (Dict): Arguments to pass to the function
Returns:
Any: Result of the function call or error message
"""
print(f"π§ Executing function: {function_name}")
print(f"π With arguments: {function_args}")
try:
# π― Route to the correct function
if function_name == "search_faq":
return search_faq(**function_args)
elif function_name == "add_faq_entry":
return add_faq_entry(**function_args)
else:
return {"error": f"Unknown function: {function_name}"}
except Exception as e:
print(f"β Function execution error: {e}")
return {"error": f"Function failed: {str(e)}"}
def llm_zoomcamp_agent_v2(user_message: str, max_iterations: int = 3) -> str:
"""
π Advanced LLM Zoomcamp agent with OpenAI Function Calling!
This agent can use tools, think iteratively, and provide comprehensive answers.
It's like having a super-smart teaching assistant! π
Args:
user_message (str): The student's question or request
max_iterations (int): Maximum number of tool uses (prevents infinite loops)
Returns:
str: Complete, helpful response
"""
print(f"π LLM Zoomcamp Agent V2 starting!")
print(f"β Student question: {user_message}")
print("="*60)
# π System prompt for our professional agent
system_prompt = """
π You are an expert teaching assistant for the LLM Zoomcamp course!
Your mission is to help students succeed by:
- π Providing accurate information about the course
- π Searching the FAQ when needed
- π Adding new FAQ entries for great questions
- π§ Using your knowledge when appropriate
- π Being encouraging and supportive
You have access to powerful tools:
1. π search_faq: Search course FAQ database
2. π add_faq_entry: Add new entries to help future students
Always cite your sources and be specific in your answers!
""".strip()
# π¬ Start the conversation
messages = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": user_message}
]
iteration = 0
while iteration < max_iterations:
iteration += 1
print(f"π Iteration {iteration}/{max_iterations}")
try:
# π€ Call OpenAI with function calling enabled
response = client.chat.completions.create(
model="gpt-4o-mini",
messages=messages,
tools=llm_zoomcamp_tools,
tool_choice="auto" # Let ChatGPT decide when to use tools
)
# π₯ Get the response
assistant_message = response.choices[0].message
messages.append(assistant_message)
# π― Check if ChatGPT wants to use a tool
if assistant_message.tool_calls:
print(f"π§ ChatGPT wants to use {len(assistant_message.tool_calls)} tool(s)")
# π Execute each tool call
for tool_call in assistant_message.tool_calls:
function_name = tool_call.function.name
function_args = json.loads(tool_call.function.arguments)
print(f"β‘ Using tool: {function_name}")
# π Execute the function
function_result = execute_function_call(function_name, function_args)
# π Prepare result for ChatGPT
if function_name == "search_faq":
# Format search results nicely
if function_result:
context = build_context_from_results(function_result)
result_text = f"Found {len(function_result)} relevant FAQ entries:\n{context}"
else:
result_text = "No relevant FAQ entries found."
else:
# For other functions, convert result to string
result_text = json.dumps(function_result, indent=2)
# π€ Send result back to ChatGPT
messages.append({
"role": "tool",
"tool_call_id": tool_call.id,
"content": result_text
})
print("✅ All tools executed, continuing conversation...")
# Continue loop to get ChatGPT's response with the tool results
else:
# π ChatGPT provided a final answer without needing tools
print("π¬ ChatGPT provided final answer")
return assistant_message.content
except Exception as e:
print(f"β Error in agent iteration: {e}")
return f"Sorry, I encountered an error: {str(e)}"
# π If we hit max iterations, return the last message
print(f"β οΈ Reached maximum iterations ({max_iterations})")
# The last entry may be a plain dict (tool result) or an SDK message object
last = messages[-1]
return last["content"] if isinstance(last, dict) else (last.content or "No response generated.")
# π§ͺ Let's test our advanced LLM Zoomcamp agent!
print("\nπ§ͺ Testing Advanced LLM Zoomcamp Agent!")
print("="*70)
# Test with a course-specific question
test_question = "How do I set up the development environment for the LLM Zoomcamp?"
print(f"π Testing question: {test_question}")
print("-"*50)
response = llm_zoomcamp_agent_v2(test_question)
print(f"\nπ― Final Response:")
print(response)
LLM Zoomcamp Explanation: This is our masterpiece! Our advanced agent can:
- Think: Analyze what the student needs
- Search: Look through the FAQ database when needed
- Learn: Add new entries for future students
- Iterate: Use multiple tools in sequence if needed
- Respond: Provide comprehensive, helpful answers
It's like having a teaching assistant who never gets tired and keeps getting smarter!
Implementing Iterative Search Strategy (LLM Zoomcamp Advanced Patterns)
Sometimes one search isn't enough! Let's add iterative search capabilities that can explore topics more deeply.
def iterative_search_agent(user_question: str, max_searches: int = 3) -> str:
"""
π Advanced agent that can perform multiple searches to thoroughly explore a topic.
This agent is like a detective who keeps digging until they find all the clues!
Args:
user_question (str): The student's complex question
max_searches (int): Maximum number of searches to perform
Returns:
str: Comprehensive answer based on multiple searches
"""
print(f"π Starting iterative search for: {user_question}")
print(f"π― Will perform up to {max_searches} searches")
print("="*60)
# π Keep track of all information gathered
all_search_results = []
search_queries = []
# π§ Let ChatGPT plan the search strategy
planning_prompt = f"""
π You're helping a LLM Zoomcamp student with this question: "{user_question}"
To answer this thoroughly, what 2-3 specific searches should we perform?
Provide your search strategy as a JSON list of search queries.
Example format:
["Docker installation setup", "environment configuration", "troubleshooting Docker"]
Focus on different aspects of the question to gather comprehensive information.
""".strip()
try:
# π€ Get search strategy from ChatGPT
response = client.chat.completions.create(
model="gpt-4o-mini",
messages=[{"role": "user", "content": planning_prompt}]
)
# π Parse the search queries
search_plan = response.choices[0].message.content
print(f"π§ ChatGPT's search strategy: {search_plan}")
# Extract search queries (simple parsing for demo)
import re
queries = re.findall(r'"([^"]+)"', search_plan)
search_queries = queries[:max_searches] # Limit to max_searches
if not search_queries:
# Fallback: use the original question
search_queries = [user_question]
print(f"π Will perform {len(search_queries)} searches:")
for i, query in enumerate(search_queries, 1):
print(f" {i}. {query}")
except Exception as e:
print(f"β οΈ Error in search planning: {e}")
search_queries = [user_question] # Fallback
# π Perform each search
for i, query in enumerate(search_queries, 1):
print(f"\nπ Search {i}/{len(search_queries)}: {query}")
try:
results = search_faq(query)
if results:
all_search_results.extend(results)
print(f"✅ Found {len(results)} results")
else:
print("π No results for this query")
except Exception as e:
print(f"β Error in search {i}: {e}")
# ποΈ Remove duplicates and organize results
unique_results = []
seen_ids = set()
for result in all_search_results:
result_id = result.get('id') or result.get('question', '')
if result_id not in seen_ids:
unique_results.append(result)
seen_ids.add(result_id)
print(f"\nπ Collected {len(unique_results)} unique results from all searches")
# π Build comprehensive context
if unique_results:
comprehensive_context = build_context_from_results(unique_results)
# π Create final answer prompt
final_prompt = f"""
π You're a LLM Zoomcamp teaching assistant. A student asked: "{user_question}"
You performed multiple searches and gathered this comprehensive information:
{comprehensive_context}
Based on ALL the information above, provide a detailed, helpful answer to the student's question.
Structure your answer clearly and reference specific information from the search results.
Be encouraging and supportive! π
""".strip()
# π€ Get final comprehensive answer
final_response = client.chat.completions.create(
model="gpt-4o-mini",
messages=[{"role": "user", "content": final_prompt}]
)
answer = final_response.choices[0].message.content
# π Add search summary
search_summary = f"\n\nπ Search Summary: Performed {len(search_queries)} targeted searches and found {len(unique_results)} relevant FAQ entries."
return answer + search_summary
else:
return f"Sorry, I couldn't find specific information about '{user_question}' in our FAQ database. You might want to ask in the course forum or check the course materials directly! π"
# π§ͺ Test iterative search with a complex question
print("\nπ§ͺ Testing LLM Zoomcamp Iterative Search Agent!")
print("="*70)
complex_question = "I'm having trouble with my development environment. Docker isn't working and I can't connect to the database."
print(f"π€ Complex question: {complex_question}")
print("-"*50)
iterative_response = iterative_search_agent(complex_question)
print(f"\nπ― Comprehensive Response:")
print(iterative_response)
LLM Zoomcamp Explanation: Iterative search is like being a research detective! Instead of just one search, our agent:
- Plans: Breaks complex questions into smaller, specific searches
- Searches: Performs multiple targeted searches
- Combines: Merges all results into comprehensive knowledge
- Synthesizes: Creates a complete answer from all findings
It's perfect for complex questions that touch on multiple topics!
Building Conversational Agents (LLM Zoomcamp Chat Experience)
Let's create a conversational agent that remembers context and can chat naturally with students!
Implementing Conversation Memory (LLM Zoomcamp State Management)
class LLMZoomcampChatAgent:
"""
π¬ A conversational agent for LLM Zoomcamp with memory and personality!
This agent remembers your conversation and provides consistent, helpful responses.
Think of it as your personal course tutor who knows your learning journey! π
"""
def __init__(self):
"""π Initialize the LLM Zoomcamp chat agent."""
# π Conversation memory
self.conversation_history = []
# π Student context tracking
self.student_context = {
"topics_discussed": [],
"problems_solved": [],
"current_focus": None,
"learning_stage": "beginner" # beginner, intermediate, advanced
}
# π Agent personality and system prompt
self.system_prompt = """
π You are a friendly, knowledgeable teaching assistant for the LLM Zoomcamp!
Your personality:
- π Encouraging and supportive
- π― Focused on helping students succeed
- π Expert in course content and technical details
- π‘ Great at explaining complex concepts simply
- π Remembers conversation context
Your capabilities:
- π Search course FAQ database
- π Add new FAQ entries
- π¬ Maintain conversation context
- π¨ Adapt explanations to student level
Always:
- Use emojis to make conversations engaging!
- Reference previous parts of the conversation when relevant
- Ask clarifying questions if needed
- Provide practical, actionable advice
- Celebrate student progress! π
""".strip()
# π Start conversation
self.conversation_history.append({
"role": "system",
"content": self.system_prompt
})
print("π¬ LLM Zoomcamp Chat Agent initialized!")
print("π Ready to help you learn and succeed!")
def add_student_context(self, topic: str, context_type: str = "discussed"):
"""
π Track student learning context.
Args:
topic (str): What topic was covered
context_type (str): Type of interaction (discussed, solved, focused)
"""
if context_type == "discussed" and topic not in self.student_context["topics_discussed"]:
self.student_context["topics_discussed"].append(topic)
elif context_type == "solved":
self.student_context["problems_solved"].append(topic)
elif context_type == "focus":
self.student_context["current_focus"] = topic
def get_context_summary(self) -> str:
"""
π Generate a summary of the student's learning context.
Returns:
str: Formatted context summary
"""
context = self.student_context
summary_parts = []
if context["topics_discussed"]:
topics = ", ".join(context["topics_discussed"][-3:]) # Last 3 topics
summary_parts.append(f"π Recently discussed: {topics}")
if context["problems_solved"]:
problems = ", ".join(context["problems_solved"][-2:]) # Last 2 problems
summary_parts.append(f"✅ Recently solved: {problems}")
if context["current_focus"]:
summary_parts.append(f"π― Current focus: {context['current_focus']}")
summary_parts.append(f"π Learning stage: {context['learning_stage']}")
return "\n".join(summary_parts) if summary_parts else "π New conversation"
def chat(self, user_message: str) -> str:
"""
π¬ Have a natural conversation with memory and context.
Args:
user_message (str): Student's message
Returns:
str: Agent's helpful response
"""
print(f"\n㪠Student: {user_message}")
print("π€ Thinking...")
# π Add user message to conversation
self.conversation_history.append({
"role": "user",
"content": user_message
})
# π Add context summary for the agent's awareness
context_summary = self.get_context_summary()
enhanced_message = f"""
Student Context: {context_summary}
Current Message: {user_message}
""".strip()
try:
# π€ Get response with function calling capability
response = client.chat.completions.create(
model="gpt-4o-mini",
messages=self.conversation_history + [
{"role": "system", "content": f"Context: {context_summary}"}
],
tools=llm_zoomcamp_tools,
tool_choice="auto",
temperature=0.7 # Slightly more creative responses
)
assistant_message = response.choices[0].message
# π§ Handle tool calls if needed
if assistant_message.tool_calls:
print("π§ Using tools to help answer...")
# Add assistant message with tool calls
self.conversation_history.append(assistant_message)
# Execute tool calls
for tool_call in assistant_message.tool_calls:
function_name = tool_call.function.name
function_args = json.loads(tool_call.function.arguments)
print(f"β‘ Using: {function_name}")
# Execute function
result = execute_function_call(function_name, function_args)
# Format result
if function_name == "search_faq" and result:
result_text = build_context_from_results(result)
else:
result_text = json.dumps(result, indent=2)
# Add tool result to conversation
self.conversation_history.append({
"role": "tool",
"tool_call_id": tool_call.id,
"content": result_text
})
# Get final response with tool results
final_response = client.chat.completions.create(
model="gpt-4o-mini",
messages=self.conversation_history,
temperature=0.7
)
final_message = final_response.choices[0].message.content
else:
final_message = assistant_message.content
# π Add assistant response to conversation
self.conversation_history.append({
"role": "assistant",
"content": final_message
})
# π§ Update student context based on conversation
self._update_context_from_message(user_message, final_message)
print(f"π Agent: {final_message}")
return final_message
except Exception as e:
error_msg = f"😅 Sorry, I encountered an error: {str(e)}. Let's try rephrasing your question!"
print(f"β Error: {e}")
return error_msg
def _update_context_from_message(self, user_msg: str, agent_msg: str):
"""
π§ Update student context based on conversation content.
Args:
user_msg (str): What the student said
agent_msg (str): How the agent responded
"""
# π― Simple keyword-based context tracking
technical_topics = ["docker", "python", "environment", "setup", "installation", "database"]
course_topics = ["assignment", "homework", "deadline", "enrollment", "requirements"]
for topic in technical_topics + course_topics:
if topic.lower() in user_msg.lower():
self.add_student_context(topic, "discussed")
# π Detect if a problem was solved
if any(word in agent_msg.lower() for word in ["here's how", "solution", "try this", "steps:"]):
for topic in technical_topics:
if topic in user_msg.lower():
self.add_student_context(f"{topic} problem", "solved")
def get_conversation_summary(self) -> str:
"""
π Get a summary of the entire conversation.
Returns:
str: Conversation summary
"""
user_messages = [msg["content"] for msg in self.conversation_history if msg["role"] == "user"]
total_messages = len(user_messages)
summary = f"""
π LLM Zoomcamp Conversation Summary
π¬ Total messages: {total_messages}
π Topics discussed: {len(self.student_context['topics_discussed'])}
✅ Problems solved: {len(self.student_context['problems_solved'])}
π Learning stage: {self.student_context['learning_stage']}
π£οΈ Recent topics: {', '.join(self.student_context['topics_discussed'][-5:]) if self.student_context['topics_discussed'] else 'None yet'}
""".strip()
return summary
# π§ͺ Let's test our conversational agent!
print("\nπ§ͺ Testing LLM Zoomcamp Conversational Agent!")
print("="*70)
# Create agent instance
chat_agent = LLMZoomcampChatAgent()
# Have a conversation!
print("\n㪠Starting conversation...")
response1 = chat_agent.chat("Hi! I'm new to the LLM Zoomcamp. Can you help me get started?")
print("\n" + "-"*40)
response2 = chat_agent.chat("I'm having trouble installing Docker. What should I do?")
print("\n" + "-"*40)
response3 = chat_agent.chat("Thanks! That was helpful. What's the next step after Docker?")
print("\n" + "-"*40)
print("π Conversation Summary:")
print(chat_agent.get_conversation_summary())
LLM Zoomcamp Explanation: Our conversational agent is like having a smart tutor who:
- Remembers: Keeps track of your entire conversation
- Learns: Understands what topics you've discussed
- Focuses: Knows what you're currently working on
- Adapts: Uses tools when needed to help you
- Personalizes: Adjusts responses based on your learning stage
It's like having a study buddy who never forgets anything you've talked about!
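One caveat about never forgetting: every stored message gets re-sent to the API on each turn, so unbounded history means growing cost and, eventually, context-window overflow. A common mitigation - sketched here as a standalone helper, not wired into the class above - is to keep the system prompt plus only the most recent messages:

```python
def trim_history(history: list, keep_last: int = 10) -> list:
    """Keep system messages plus the last `keep_last` non-system messages.

    Assumes messages are plain dicts with a "role" key, as in our sketch;
    SDK message objects would need converting first.
    """
    system = [m for m in history if m.get("role") == "system"]
    rest = [m for m in history if m.get("role") != "system"]
    return system + rest[-keep_last:]

history = [{"role": "system", "content": "You are a tutor."}]
history += [{"role": "user", "content": f"question {i}"} for i in range(20)]
trimmed = trim_history(history, keep_last=5)
print(len(trimmed))  # 6: the system prompt plus the last 5 questions
```

A more sophisticated version would summarize the dropped messages instead of discarding them, so long-term context survives in compressed form.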
Advanced UI Patterns (LLM Zoomcamp Visual Experience)
Let's create beautiful, interactive interfaces for our agentic systems! Perfect for Jupyter notebooks!
Creating Rich Jupyter Interfaces (LLM Zoomcamp Interactive Learning)
from IPython.display import display, HTML, Markdown
import time
from datetime import datetime
class LLMZoomcampUI:
"""
π¨ Beautiful UI components for LLM Zoomcamp agents in Jupyter!
This class creates gorgeous, interactive interfaces that make
learning with AI agents a delightful experience! β¨
"""
def __init__(self, theme: str = "zoomcamp"):
"""
π¨ Initialize the UI system.
Args:
theme (str): UI theme ("zoomcamp", "dark", "light")
"""
self.theme = theme
self.conversation_id = f"conv_{int(time.time())}"
# π Load theme styles
self._load_theme_styles()
print("π¨ LLM Zoomcamp UI system initialized!")
print(f"β¨ Using theme: {theme}")
def _load_theme_styles(self):
"""π Load CSS styles for the current theme."""
if self.theme == "zoomcamp":
self.styles = """
<style>
.llm-zoomcamp-container {
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
max-width: 900px;
margin: 20px auto;
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
border-radius: 15px;
padding: 25px;
box-shadow: 0 10px 30px rgba(0,0,0,0.2);
color: white;
}
.message-bubble {
margin: 15px 0;
padding: 15px 20px;
border-radius: 20px;
box-shadow: 0 3px 10px rgba(0,0,0,0.1);
animation: slideIn 0.3s ease-out;
}
.user-message {
background: rgba(255,255,255,0.9);
color: #333;
margin-left: 50px;
border-bottom-right-radius: 5px;
}
.agent-message {
background: rgba(255,255,255,0.1);
color: white;
margin-right: 50px;
border-bottom-left-radius: 5px;
border: 1px solid rgba(255,255,255,0.2);
}
.system-message {
background: rgba(255,193,7,0.2);
color: #fff3cd;
text-align: center;
margin: 10px 20px;
font-size: 0.9em;
border: 1px solid rgba(255,193,7,0.3);
}
.tool-execution {
background: rgba(40,167,69,0.2);
color: #d4edda;
margin: 10px 30px;
font-family: 'Courier New', monospace;
font-size: 0.85em;
border-left: 4px solid #28a745;
}
.header {
text-align: center;
margin-bottom: 30px;
padding-bottom: 20px;
border-bottom: 2px solid rgba(255,255,255,0.3);
}
.header h2 {
margin: 0;
font-size: 2.2em;
text-shadow: 2px 2px 4px rgba(0,0,0,0.3);
}
.timestamp {
font-size: 0.8em;
opacity: 0.7;
margin-top: 5px;
}
.loading {
display: inline-block;
animation: spin 1s linear infinite;
}
@keyframes slideIn {
from { opacity: 0; transform: translateY(10px); }
to { opacity: 1; transform: translateY(0); }
}
@keyframes spin {
0% { transform: rotate(0deg); }
100% { transform: rotate(360deg); }
}
.emoji-large {
font-size: 1.5em;
margin-right: 10px;
}
</style>
"""
# Display styles
display(HTML(self.styles))
def show_header(self, title: str = "LLM Zoomcamp AI Assistant"):
"""
π Display a beautiful header for the interface.
Args:
title (str): Title to display
"""
header_html = f"""
<div class="llm-zoomcamp-container">
<div class="header">
<h2>{title}</h2>
<p>π¬ Your intelligent course companion for learning and problem-solving!</p>
<div class="timestamp">π Session started: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}</div>
</div>
</div>
"""
display(HTML(header_html))
def show_user_message(self, message: str):
"""
π€ Display a user message bubble.
Args:
message (str): User's message
"""
bubble_html = f"""
<div class="llm-zoomcamp-container">
<div class="message-bubble user-message">
<span class="emoji-large">π€</span>
<strong>You:</strong> {message}
<div class="timestamp">π {datetime.now().strftime('%H:%M:%S')}</div>
</div>
</div>
"""
display(HTML(bubble_html))
def show_agent_message(self, message: str, thinking_time: float = 0):
"""
π€ Display an agent response bubble.
Args:
message (str): Agent's response
thinking_time (float): How long the agent "thought"
"""
thinking_indicator = f" (π Thought for {thinking_time:.1f}s)" if thinking_time > 0 else ""
# Backslashes aren't allowed inside f-string expressions before Python 3.12,
# so do the newline replacement up front
formatted_message = message.replace('\n', '<br>')
bubble_html = f"""
<div class="llm-zoomcamp-container">
<div class="message-bubble agent-message">
<span class="emoji-large">π</span>
<strong>LLM Zoomcamp Assistant:</strong><br>
{formatted_message}
<div class="timestamp">π€ {datetime.now().strftime('%H:%M:%S')}{thinking_indicator}</div>
</div>
</div>
"""
display(HTML(bubble_html))
def show_tool_execution(self, tool_name: str, tool_args: dict, result_summary: str):
"""
π§ Display tool execution information.
Args:
tool_name (str): Name of the tool used
tool_args (dict): Arguments passed to the tool
result_summary (str): Summary of the result
"""
args_str = ", ".join([f"{k}='{v}'" for k, v in tool_args.items()])
tool_html = f"""
<div class="llm-zoomcamp-container">
<div class="tool-execution">
<span class="emoji-large">π§</span>
<strong>Tool Used:</strong> {tool_name}({args_str})<br>
<strong>Result:</strong> {result_summary}
<div class="timestamp">β‘ {datetime.now().strftime('%H:%M:%S')}</div>
</div>
</div>
"""
display(HTML(tool_html))
def show_system_message(self, message: str):
"""
βοΈ Display a system/status message.
Args:
message (str): System message
"""
system_html = f"""
<div class="llm-zoomcamp-container">
<div class="message-bubble system-message">
<span class="emoji-large">βοΈ</span>
{message}
<div class="timestamp">π {datetime.now().strftime('%H:%M:%S')}</div>
</div>
</div>
"""
display(HTML(system_html))
def show_loading(self, message: str = "π€ Thinking..."):
"""
β³ Display a loading message.
Args:
message (str): Loading message
"""
loading_html = f"""
<div class="llm-zoomcamp-container">
<div class="message-bubble system-message">
<span class="loading emoji-large">π</span>
{message}
</div>
</div>
"""
display(HTML(loading_html))
# π¨ Enhanced Chat Agent with Beautiful UI
class LLMZoomcampVisualAgent(LLMZoomcampChatAgent):
"""
π¨ LLM Zoomcamp chat agent with beautiful visual interface!
Combines all the intelligence of our chat agent with gorgeous UI! β¨
"""
def __init__(self):
"""π¨ Initialize the visual agent."""
super().__init__()
self.ui = LLMZoomcampUI()
# π Show beautiful header
self.ui.show_header()
self.ui.show_system_message("π LLM Zoomcamp Visual Agent ready to help!")
def visual_chat(self, user_message: str) -> str:
"""
π¬ Chat with beautiful visual interface.
Args:
user_message (str): User's message
Returns:
str: Agent's response
"""
# π€ Show user message
self.ui.show_user_message(user_message)
# β³ Show loading
self.ui.show_loading("π€ Analyzing your question...")
time.sleep(1) # Brief pause for effect
# π Track thinking time
start_time = time.time()
# π Add to conversation history
self.conversation_history.append({
"role": "user",
"content": user_message
})
try:
# π€ Get response with tools
response = client.chat.completions.create(
model="gpt-4o-mini",
messages=self.conversation_history + [
{"role": "system", "content": f"Context: {self.get_context_summary()}"}
],
tools=llm_zoomcamp_tools,
tool_choice="auto",
temperature=0.7
)
assistant_message = response.choices[0].message
# π§ Handle tool usage
if assistant_message.tool_calls:
self.ui.show_system_message("π§ Using tools to find the best answer...")
# Add assistant message
self.conversation_history.append(assistant_message)
# Execute tools
for tool_call in assistant_message.tool_calls:
function_name = tool_call.function.name
function_args = json.loads(tool_call.function.arguments)
# Show tool execution
result = execute_function_call(function_name, function_args)
if function_name == "search_faq" and result:
result_summary = f"Found {len(result)} relevant FAQ entries"
result_text = build_context_from_results(result)
else:
result_summary = "Completed successfully"
result_text = json.dumps(result, indent=2)
self.ui.show_tool_execution(function_name, function_args, result_summary)
# Add to conversation
self.conversation_history.append({
"role": "tool",
"tool_call_id": tool_call.id,
"content": result_text
})
# Get final response
final_response = client.chat.completions.create(
model="gpt-4o-mini",
messages=self.conversation_history,
temperature=0.7
)
final_message = final_response.choices[0].message.content
else:
final_message = assistant_message.content
# π Add to conversation
self.conversation_history.append({
"role": "assistant",
"content": final_message
})
# π Calculate thinking time
thinking_time = time.time() - start_time
# π€ Show agent response
self.ui.show_agent_message(final_message, thinking_time)
# π§ Update context
self._update_context_from_message(user_message, final_message)
return final_message
except Exception as e:
error_msg = f"π Sorry, I encountered an error: {str(e)}"
self.ui.show_agent_message(error_msg)
return error_msg
# π§ͺ Demo the beautiful visual interface!
print("π§ͺ Creating Beautiful LLM Zoomcamp Visual Demo!")
print("="*60)
# Note: This would work beautifully in a Jupyter notebook!
print("π Visual demo code ready!")
print("π¨ Run this in Jupyter for the full visual experience!")
# Example usage:
demo_code = '''
# π¨ In a Jupyter notebook, run this for beautiful visual chat:
visual_agent = LLMZoomcampVisualAgent()
# π¬ Start chatting with beautiful UI!
visual_agent.visual_chat("Hi! I need help setting up my development environment.")
visual_agent.visual_chat("What are the Docker requirements?")
visual_agent.visual_chat("Thanks! How do I troubleshoot if Docker fails to start?")
'''
print("π Jupyter Notebook Demo Code:")
print(demo_code)
π LLM Zoomcamp Explanation: Our visual interface makes learning with AI agents a joy! π¨ It features:
- π Beautiful Themes: Professional, engaging visual design
- π¬ Chat Bubbles: Clear separation between user and agent messages
- π§ Tool Visualization: See exactly what tools are being used
- β³ Loading States: Smooth, engaging interaction flow
- π Status Messages: Clear system feedback and progress updates
Perfect for creating engaging educational experiences in Jupyter notebooks! β¨
π Key Concepts Mastered in Part 2 (LLM Zoomcamp Advanced Skills)
Congratulations! You've now mastered advanced agentic AI development! π Here's what you've learned:
π Professional Function Calling (LLM Zoomcamp Industry Standard)
- π‘οΈ Type Safety: Robust parameter validation and error handling
- π Documentation: Self-describing tools with clear specifications
- β‘ Performance: Optimized structured interactions with OpenAI
- π§ Extensibility: Easy to add new tools and capabilities
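For reference, each entry in `llm_zoomcamp_tools` follows the standard OpenAI tool-schema shape. Here is a minimal sketch of what the `search_faq` tool looks like; the description and parameter wording are illustrative, not the exact definition from earlier in the series:

```python
# Minimal sketch of an OpenAI function-calling tool definition.
# The name "search_faq" matches the tool the agent above dispatches on;
# the description text and parameter docs here are illustrative.
search_faq_tool = {
    "type": "function",
    "function": {
        "name": "search_faq",
        "description": "Search the LLM Zoomcamp FAQ for entries relevant to a query.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "The search query to run against the FAQ index.",
                }
            },
            "required": ["query"],
        },
    },
}
```

Because the schema is self-describing, the model knows the tool's name, purpose, and required arguments without any custom prompt engineering.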
π Iterative Search Strategies (LLM Zoomcamp Research Mastery)
- π§ Strategic Planning: AI-driven search strategy development
- π Multi-Query Search: Comprehensive topic exploration
- π Result Synthesis: Combining multiple searches into coherent answers
- π― Context Management: Handling complex, multi-faceted questions
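The multi-query idea recapped above can be sketched as a loop that fires several related queries and merges de-duplicated results. This is a generic sketch, not the course's actual implementation; the `multi_query_search` helper and the `'question'`-field de-dup key are assumptions:

```python
def multi_query_search(queries, search_fn):
    """Run several related queries and merge results, dropping duplicates."""
    seen = set()
    merged = []
    for query in queries:
        for doc in search_fn(query):
            key = doc.get("question")  # assumes FAQ docs carry a 'question' field
            if key not in seen:
                seen.add(key)
                merged.append(doc)
    return merged

# Toy search function standing in for the real FAQ index:
faq = {
    "docker": [{"question": "How do I install Docker?"}],
    "docker compose": [{"question": "How do I install Docker?"},
                       {"question": "What is docker-compose?"}],
}
results = multi_query_search(["docker", "docker compose"],
                             lambda q: faq.get(q, []))
print(len(results))  # 2 unique entries
```

The duplicate entry from the second query is dropped, so the synthesized answer draws on each FAQ entry only once.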
π¬ Conversational Intelligence (LLM Zoomcamp Communication Excellence)
- π Memory Management: Persistent conversation context
- π Learning Tracking: Student progress and topic mastery
- π― Adaptive Responses: Context-aware, personalized interactions
- π Personality: Consistent, engaging agent behavior
π¨ Professional UI/UX (LLM Zoomcamp Visual Excellence)
- πΌοΈ Interactive Interfaces: Beautiful, engaging user experiences
- π± Responsive Design: Jupyter-optimized visual components
- β‘ Real-time Feedback: Loading states and progress indicators
- π Theme Systems: Customizable visual experiences
π Advanced Patterns You've Implemented
- π§ Function Orchestration: Chaining multiple tools for complex tasks
- π§ Context-Aware Decision Making: Using conversation history for better responses
- π State Management: Tracking student progress and learning context
- π¨ Visual Feedback Systems: Real-time UI updates and status indicators
- β‘ Error Recovery: Graceful handling of failures and edge cases
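The error-recovery pattern in that list can be made concrete with a small retry helper around any flaky call (a generic sketch under my own naming; `call_with_retry` and the toy `flaky` function are hypothetical, not part of the tutorial's code):

```python
import time

def call_with_retry(fn, max_attempts=3, base_delay=0.1):
    """Retry a flaky call with exponential backoff; re-raise after the last attempt."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))

# Toy example: a call that fails twice, then succeeds on the third attempt.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

print(call_with_retry(flaky))  # ok
```

In the visual agent above this role is played by the `try`/`except` around the chat completion call, which turns failures into a friendly error bubble instead of a traceback.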
π― Real-World Applications (LLM Zoomcamp Career Preparation)
The patterns you've learned are used in production systems like:
- π’ Customer Support Chatbots: Multi-tool agents that can search knowledge bases
- π Educational Platforms: Personalized tutoring with progress tracking
- πΌ Business Intelligence: Agents that can query databases and generate reports
- π Research Assistants: Systems that can search multiple sources and synthesize information
π LLM Zoomcamp Achievement: You now have the skills to build professional-grade agentic AI systems! π
π Resources for Part 2
- π LLM Zoomcamp Course: Main Repository
- π§ OpenAI Function Calling: Official Documentation
- π¨ Jupyter Widgets: Interactive UI Documentation
- π¬ Conversation Design: Best Practices Guide
π LLM Zoomcamp Tutorial Series - Part 2 Complete! π
Ready for the ultimate challenge? Continue to Part 3 for enterprise-level patterns and production deployment! π
#LLMZoomcamp