DEV Community

Abdelrahman Adnan
πŸ”§ Part 2: Advanced - OpenAI Function Calling & Iterative Search

πŸŽ“ LLM Zoomcamp Tutorial Series - Professional Function Calling & Conversational Agents

Welcome to Part 2 of our LLM Zoomcamp tutorial series! πŸŽ“ Now that you understand the fundamentals from Part 1, we're ready to build professional-grade agentic systems using OpenAI Function Calling. This is where your assistant becomes truly intelligent! 🧠✨


πŸš€ OpenAI Function Calling: The Professional Standard (LLM Zoomcamp Pro Level)

In Part 1, we used JSON parsing to handle agent decisions. While that works, OpenAI Function Calling is the professional standard used in production systems! πŸ† Think of it as upgrading from handwritten forms to a professional database system.

🎯 Why Function Calling is Superior (LLM Zoomcamp Best Practices)

  • πŸ›‘οΈ Type Safety: Automatic validation of inputs and outputs
  • πŸ“ Documentation: Self-describing tools with clear parameters
  • ⚑ Performance: Optimized for structured interactions
  • πŸ—οΈ Scalability: Easy to add new tools and capabilities
  • 🎭 Reliability: Consistent format reduces parsing errors

Let's build it step by step! πŸ”¨
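Before we do, here's a minimal sketch (no API call, with illustrative stand-in strings) of the core difference: in Part 1 the intended action was buried in free-form model text, while function calling hands you arguments as a JSON string that matches your declared schema.

```python
import json

# Part 1 style: the action is buried in free-form text -- fragile to parse.
freeform_reply = 'Hmm, maybe we should SEARCH for "Docker installation" first?'

# Function calling style: arguments arrive as a JSON string matching your schema.
tool_call_arguments = '{"query": "Docker installation", "course": "data-engineering-zoomcamp"}'

args = json.loads(tool_call_arguments)  # valid JSON by contract
assert "query" in args                  # required fields are enforced by the schema
print(args["query"])  # -> Docker installation
```

No regexes, no guessing where the quote marks are — the structure is part of the API contract.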

πŸ”§ Setting Up Advanced Function Tools (LLM Zoomcamp Implementation)

# πŸ“š Enhanced imports for professional function calling
import json
import requests
from openai import OpenAI
from minsearch import AppendableIndex
from typing import Dict, List, Any, Optional

# πŸ”‘ Initialize our OpenAI client (your gateway to ChatGPT)
client = OpenAI()

print("πŸ”§ Setting up LLM Zoomcamp Function Calling system!")
print("✨ This is the professional way to build AI agents!")

πŸŽ“ LLM Zoomcamp Explanation: We're importing typing to make our code more professional. It's like adding labels to all your tools so everyone knows what they do! 🏷️

πŸ“Š Enhanced Data Setup with Error Handling (LLM Zoomcamp Robustness)

def setup_llm_zoomcamp_data():
    """
    πŸ“Š Professional data setup with error handling.

    This function downloads and prepares our LLM Zoomcamp FAQ data
    with proper error handling - just like real production systems!

    Returns:
        tuple: (documents list, search index) or (None, None) if failed
    """

    try:
        print("🌐 Downloading LLM Zoomcamp FAQ data...")

        # πŸ“₯ Download with timeout and error handling
        docs_url = 'https://github.com/alexeygrigorev/llm-rag-workshop/raw/main/notebooks/documents.json'
        response = requests.get(docs_url, timeout=30)  # 30 second timeout
        response.raise_for_status()  # Raise exception for bad status codes

        documents_raw = response.json()
        print(f"βœ… Successfully downloaded {len(documents_raw)} courses!")

        # πŸ”„ Process the data (same as Part 1, but with better logging)
        documents = []
        for course in documents_raw:
            course_name = course['course']
            for doc in course['documents']:
                doc['course'] = course_name
                documents.append(doc)

        print(f"πŸ“ Processed {len(documents)} FAQ documents total")

        # πŸ—‚οΈ Create search index with error handling
        index = AppendableIndex(
            text_fields=["question", "text", "section"],
            keyword_fields=["course"]
        )
        index.fit(documents)

        print("🎯 Search index created successfully!")
        return documents, index

    except requests.RequestException as e:
        print(f"❌ Network error downloading data: {e}")
        return None, None
    except json.JSONDecodeError as e:
        print(f"❌ Error parsing JSON data: {e}")
        return None, None
    except Exception as e:
        print(f"❌ Unexpected error: {e}")
        return None, None

# πŸš€ Initialize our LLM Zoomcamp system
documents, index = setup_llm_zoomcamp_data()

if documents and index:
    print("πŸŽ‰ LLM Zoomcamp system ready!")
else:
    print("😱 System initialization failed!")

πŸŽ“ LLM Zoomcamp Explanation: Professional systems always handle errors gracefully! Instead of crashing when something goes wrong, we catch problems and explain what happened. It's like having airbags in your car! πŸš—πŸ’¨

πŸ› οΈ Building Professional Function Tools (LLM Zoomcamp Function Library)

Now let's create our function tools using OpenAI's professional format! This is like giving ChatGPT superpowers! 🦸‍♀️

# πŸ” Function 1: Professional FAQ Search
def search_faq(query: str, course: str = "data-engineering-zoomcamp") -> List[Dict[str, Any]]:
    """
    πŸ” Search the LLM Zoomcamp FAQ database for relevant information.

    This is like having a super-smart librarian who can instantly find
    the most relevant course information for any question!

    Args:
        query (str): What to search for (e.g., "Docker installation")
        course (str): Which course to search in (default: data-engineering-zoomcamp)

    Returns:
        List[Dict]: List of relevant FAQ entries with scores
    """

    if not index:
        print("❌ Search index not available!")
        return []

    print(f"πŸ” Searching for: '{query}' in course: {course}")

    # 🎯 Professional search with boosting
    boost = {'question': 3.0, 'section': 0.5}

    try:
        results = index.search(
            query=query,
            filter_dict={'course': course},
            boost_dict=boost,
            num_results=5,
            output_ids=True
        )

        print(f"βœ… Found {len(results)} relevant results")
        return results

    except Exception as e:
        print(f"❌ Search error: {e}")
        return []

# πŸ“ Function 2: Professional Context Builder  
def build_context_from_results(results: List[Dict[str, Any]]) -> str:
    """
    πŸ“ Build a professional context string from search results.

    Think of this as organizing your research notes in the perfect format
    for ChatGPT to understand and use effectively!

    Args:
        results (List[Dict]): Search results from search_faq function

    Returns:
        str: Beautifully formatted context string
    """

    if not results:
        return "No relevant information found in the FAQ database."

    context_parts = []

    for i, doc in enumerate(results, 1):
        context_part = f"""
Document {i}:
Section: {doc.get('section', 'Unknown')}
Question: {doc.get('question', 'No question')}
Answer: {doc.get('text', 'No answer available')}
Relevance Score: {doc.get('score', 'N/A')}
""".strip()
        context_parts.append(context_part)

    final_context = "\n\n" + "\n\n".join(context_parts)
    print(f"πŸ“ Built context from {len(results)} documents")

    return final_context

# πŸ§ͺ Let's test our professional functions!
print("\nπŸ§ͺ Testing LLM Zoomcamp Professional Functions:")
print("="*60)

# Test search function
test_results = search_faq("How do I install Docker?")
if test_results:
    print(f"🎯 First result: {test_results[0].get('question', 'No question')}")

    # Test context builder
    test_context = build_context_from_results(test_results)
    print(f"πŸ“„ Context length: {len(test_context)} characters")
    print(f"πŸ“‹ Context preview: {test_context[:200]}...")
else:
    print("πŸ˜” No results found for test query")

πŸŽ“ LLM Zoomcamp Explanation: Our professional functions are like upgraded tools in a workshop. They have:

  • 🏷️ Clear documentation (docstrings explaining what they do)
  • πŸ›‘οΈ Error handling (they don't break when things go wrong)
  • πŸ“Š Type hints (Python knows what kind of data to expect)
  • βœ… Validation (they check if everything is working properly)

🎭 OpenAI Function Definitions (LLM Zoomcamp Professional Format)

Now comes the magic! We need to tell OpenAI about our functions in a special format. This is like writing a manual for ChatGPT! πŸ“–

# 🎭 Professional function definitions for OpenAI
llm_zoomcamp_tools = [
    {
        "type": "function",
        "function": {
            "name": "search_faq",
            "description": """
πŸ” Search the LLM Zoomcamp FAQ database for course-related information.

Use this tool when students ask questions about:
- Course enrollment and requirements
- Technical setup (Docker, Python, environment)  
- Assignment deadlines and procedures
- Course content and materials
- Troubleshooting technical issues

This tool searches through real FAQ data from data engineering courses.
""".strip(),
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The search query. Be specific! Good: 'Docker installation on Windows', Bad: 'Docker'"
                    },
                    "course": {
                        "type": "string", 
                        "description": "Which course to search in",
                        "enum": ["data-engineering-zoomcamp"],
                        "default": "data-engineering-zoomcamp"
                    }
                },
                "required": ["query"]
            }
        }
    },
    {
        "type": "function", 
        "function": {
            "name": "add_faq_entry",
            "description": """
πŸ“ Add a new entry to the LLM Zoomcamp FAQ database.

Use this tool when:
- A student asks a good question that isn't in the FAQ
- You want to save information for future students  
- The conversation reveals a gap in our FAQ coverage

This helps improve the course materials for everyone! πŸŽ“
""".strip(),
            "parameters": {
                "type": "object",
                "properties": {
                    "question": {
                        "type": "string",
                        "description": "The question to add (should be clear and specific)"
                    },
                    "answer": {
                        "type": "string", 
                        "description": "The detailed answer to the question"
                    },
                    "section": {
                        "type": "string",
                        "description": "Which course section this belongs to (e.g., 'setup', 'docker', 'assignments')"
                    }
                },
                "required": ["question", "answer", "section"]
            }
        }
    }
]

print("🎭 Created professional LLM Zoomcamp function definitions!")
print("✨ ChatGPT now knows about our awesome tools!")

# πŸ“‹ Let's see what we've created
for tool in llm_zoomcamp_tools:
    func_name = tool['function']['name']
    print(f"πŸ”§ Tool: {func_name}")
    print(f"πŸ“– Purpose: {tool['function']['description'][:100]}...")

πŸŽ“ LLM Zoomcamp Explanation: These function definitions are like instruction manuals for ChatGPT. They tell the AI:

  • 🎯 What each tool does (description)
  • πŸ“ What information it needs (parameters)
  • ⚠️ What's required vs optional (required fields)
  • 🎨 How to use them effectively (examples and guidelines)

It's like giving ChatGPT a detailed handbook for using your tools! πŸ“š
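OpenAI enforces this schema when generating arguments, but it helps to see what that enforcement looks like. Here's a minimal hand-rolled checker (`validate_arguments` is a hypothetical helper, and the inline schema mirrors our `search_faq` parameters) — a production system might use the `jsonschema` package instead:

```python
def validate_arguments(schema: dict, args: dict) -> list:
    """Return a list of problems; an empty list means the arguments are valid."""
    problems = []
    for field in schema.get("required", []):
        if field not in args:
            problems.append(f"missing required field: {field}")
    for name, value in args.items():
        spec = schema["properties"].get(name)
        if spec is None:
            problems.append(f"unexpected field: {name}")
        elif "enum" in spec and value not in spec["enum"]:
            problems.append(f"{name} must be one of {spec['enum']}")
    return problems

# Mirrors the parameters block of our search_faq tool definition
search_schema = {
    "type": "object",
    "properties": {
        "query": {"type": "string"},
        "course": {"type": "string", "enum": ["data-engineering-zoomcamp"]},
    },
    "required": ["query"],
}

print(validate_arguments(search_schema, {"query": "Docker setup"}))  # -> []
print(validate_arguments(search_schema, {"course": "ml-zoomcamp"}))  # missing query, bad enum
```

Running the same kind of check on your side before executing a tool call is cheap insurance against malformed arguments.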

πŸ“ Implementing the Missing Function (LLM Zoomcamp Completeness)

We defined an add_faq_entry function but haven't implemented it yet. Let's fix that! πŸ”¨

def add_faq_entry(question: str, answer: str, section: str) -> Dict[str, Any]:
    """
    πŸ“ Add a new entry to our LLM Zoomcamp FAQ database.

    This function simulates adding to a database. In a real system,
    this would save to a database or file!

    Args:
        question (str): The question to add
        answer (str): The detailed answer
        section (str): Which section this belongs to

    Returns:
        Dict: Success/failure status with details
    """

    print(f"πŸ“ Adding new FAQ entry to section: {section}")
    print(f"❓ Question: {question}")
    print(f"βœ… Answer: {answer[:100]}{'...' if len(answer) > 100 else ''}")

    # πŸ†• Create the new entry
    new_entry = {
        "question": question,
        "text": answer,
        "section": section,
        "course": "data-engineering-zoomcamp",
        "id": f"new_entry_{len(documents) + 1}"  # Simple ID generation
    }

    try:
        # πŸ“š In a real system, you'd save to a database here
        # For our demo, we'll just add to our in-memory list
        documents.append(new_entry)

        # πŸ”„ Update our search index with the new entry
        if index:
            # Note: AppendableIndex would need an .add() method for this to work
            # For demo purposes, we'll just acknowledge the addition
            print("πŸ—‚οΈ Entry would be added to search index")

        print("βœ… Successfully added new FAQ entry!")

        return {
            "status": "success",
            "message": f"Added FAQ entry to section '{section}'",
            "entry_id": new_entry["id"]
        }

    except Exception as e:
        print(f"❌ Error adding FAQ entry: {e}")
        return {
            "status": "error", 
            "message": f"Failed to add entry: {str(e)}"
        }

print("πŸ“ FAQ entry function implemented!")
print("πŸŽ“ Our LLM Zoomcamp system can now learn and grow!")

πŸŽ“ LLM Zoomcamp Explanation: This function lets our system learn! When students ask new questions, we can save both the question and answer for future students. It's like building a knowledge base that gets smarter over time! πŸ§ πŸ“š

πŸ€– Professional Function Calling Implementation (LLM Zoomcamp Magic)

Now for the main event! Let's build our professional function calling system! 🎭

def execute_function_call(function_name: str, function_args: Dict[str, Any]) -> Any:
    """
    πŸ”§ Execute a function call safely with proper error handling.

    This is like having a safe way to use any tool in your toolbox!

    Args:
        function_name (str): Name of the function to call
        function_args (Dict): Arguments to pass to the function

    Returns:
        Any: Result of the function call or error message
    """

    print(f"πŸ”§ Executing function: {function_name}")
    print(f"πŸ“‹ With arguments: {function_args}")

    try:
        # 🎯 Route to the correct function
        if function_name == "search_faq":
            return search_faq(**function_args)
        elif function_name == "add_faq_entry":
            return add_faq_entry(**function_args)
        else:
            return {"error": f"Unknown function: {function_name}"}

    except Exception as e:
        print(f"❌ Function execution error: {e}")
        return {"error": f"Function failed: {str(e)}"}

def llm_zoomcamp_agent_v2(user_message: str, max_iterations: int = 3) -> str:
    """
    πŸš€ Advanced LLM Zoomcamp agent with OpenAI Function Calling!

    This agent can use tools, think iteratively, and provide comprehensive answers.
    It's like having a super-smart teaching assistant! πŸŽ“

    Args:
        user_message (str): The student's question or request
        max_iterations (int): Maximum number of tool uses (prevents infinite loops)

    Returns:
        str: Complete, helpful response
    """

    print(f"πŸš€ LLM Zoomcamp Agent V2 starting!")
    print(f"❓ Student question: {user_message}")
    print("="*60)

    # 🎭 System prompt for our professional agent
    system_prompt = """
πŸŽ“ You are an expert teaching assistant for the LLM Zoomcamp course!

Your mission is to help students succeed by:
- πŸ“š Providing accurate information about the course
- πŸ” Searching the FAQ when needed  
- πŸ“ Adding new FAQ entries for great questions
- 🧠 Using your knowledge when appropriate
- 😊 Being encouraging and supportive

You have access to powerful tools:
1. πŸ” search_faq: Search course FAQ database
2. πŸ“ add_faq_entry: Add new entries to help future students

Always cite your sources and be specific in your answers!
""".strip()

    # πŸ’¬ Start the conversation
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message}
    ]

    iteration = 0

    while iteration < max_iterations:
        iteration += 1
        print(f"πŸ”„ Iteration {iteration}/{max_iterations}")

        try:
            # πŸ€– Call OpenAI with function calling enabled
            response = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=messages,
                tools=llm_zoomcamp_tools,
                tool_choice="auto"  # Let ChatGPT decide when to use tools
            )

            # πŸ“₯ Get the response
            assistant_message = response.choices[0].message
            messages.append(assistant_message)

            # 🎯 Check if ChatGPT wants to use a tool
            if assistant_message.tool_calls:
                print(f"πŸ”§ ChatGPT wants to use {len(assistant_message.tool_calls)} tool(s)")

                # πŸ”„ Execute each tool call
                for tool_call in assistant_message.tool_calls:
                    function_name = tool_call.function.name
                    function_args = json.loads(tool_call.function.arguments)

                    print(f"⚑ Using tool: {function_name}")

                    # πŸš€ Execute the function
                    function_result = execute_function_call(function_name, function_args)

                    # πŸ“ Prepare result for ChatGPT
                    if function_name == "search_faq":
                        # Format search results nicely
                        if function_result:
                            context = build_context_from_results(function_result)
                            result_text = f"Found {len(function_result)} relevant FAQ entries:\n{context}"
                        else:
                            result_text = "No relevant FAQ entries found."
                    else:
                        # For other functions, convert result to string
                        result_text = json.dumps(function_result, indent=2)

                    # πŸ“€ Send result back to ChatGPT
                    messages.append({
                        "role": "tool",
                        "tool_call_id": tool_call.id,
                        "content": result_text
                    })

                print("βœ… All tools executed, continuing conversation...")
                # Continue loop to get ChatGPT's response with the tool results

            else:
                # πŸŽ‰ ChatGPT provided a final answer without needing tools
                print("πŸ’¬ ChatGPT provided final answer")
                return assistant_message.content

        except Exception as e:
            print(f"❌ Error in agent iteration: {e}")
            return f"Sorry, I encountered an error: {str(e)}"

    # πŸ”„ If we hit max iterations, return the last message
    print(f"⚠️ Reached maximum iterations ({max_iterations})")
    return messages[-1].content if messages else "No response generated."

# πŸ§ͺ Let's test our advanced LLM Zoomcamp agent!
print("\nπŸ§ͺ Testing Advanced LLM Zoomcamp Agent!")
print("="*70)

# Test with a course-specific question
test_question = "How do I set up the development environment for the LLM Zoomcamp?"
print(f"πŸ“š Testing question: {test_question}")
print("-"*50)

response = llm_zoomcamp_agent_v2(test_question)
print(f"\n🎯 Final Response:")
print(response)

πŸŽ“ LLM Zoomcamp Explanation: This is our masterpiece! Our advanced agent can:

  1. 🧠 Think: Analyze what the student needs
  2. πŸ” Search: Look through FAQ database when needed
  3. πŸ“ Learn: Add new entries for future students
  4. πŸ”„ Iterate: Use multiple tools in sequence if needed
  5. πŸ’¬ Respond: Provide comprehensive, helpful answers

It's like having a teaching assistant who never gets tired and keeps getting smarter! πŸŽ“βœ¨

πŸ”„ Implementing Iterative Search Strategy (LLM Zoomcamp Advanced Patterns)

Sometimes one search isn't enough! Let's add iterative search capabilities that can explore topics more deeply. πŸ•΅οΈβ€β™€οΈ

def iterative_search_agent(user_question: str, max_searches: int = 3) -> str:
    """
    πŸ”„ Advanced agent that can perform multiple searches to thoroughly explore a topic.

    This agent is like a detective who keeps digging until they find all the clues!

    Args:
        user_question (str): The student's complex question
        max_searches (int): Maximum number of searches to perform

    Returns:
        str: Comprehensive answer based on multiple searches
    """

    print(f"πŸ”„ Starting iterative search for: {user_question}")
    print(f"🎯 Will perform up to {max_searches} searches")
    print("="*60)

    # πŸ“ Keep track of all information gathered
    all_search_results = []
    search_queries = []

    # 🧠 Let ChatGPT plan the search strategy
    planning_prompt = f"""
πŸŽ“ You're helping a LLM Zoomcamp student with this question: "{user_question}"

To answer this thoroughly, what 2-3 specific searches should we perform?
Provide your search strategy as a JSON list of search queries.

Example format:
["Docker installation setup", "environment configuration", "troubleshooting Docker"]

Focus on different aspects of the question to gather comprehensive information.
""".strip()

    try:
        # πŸ€– Get search strategy from ChatGPT
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": planning_prompt}]
        )

        # πŸ“‹ Parse the search queries
        search_plan = response.choices[0].message.content
        print(f"🧠 ChatGPT's search strategy: {search_plan}")

        # Extract search queries: try parsing as JSON first, fall back to quoted strings
        import re
        try:
            queries = json.loads(search_plan)
            if not isinstance(queries, list):
                raise ValueError("expected a JSON list of queries")
        except (json.JSONDecodeError, ValueError):
            queries = re.findall(r'"([^"]+)"', search_plan)
        search_queries = queries[:max_searches]  # Limit to max_searches

        if not search_queries:
            # Fallback: use the original question
            search_queries = [user_question]

        print(f"πŸ” Will perform {len(search_queries)} searches:")
        for i, query in enumerate(search_queries, 1):
            print(f"  {i}. {query}")

    except Exception as e:
        print(f"⚠️ Error in search planning: {e}")
        search_queries = [user_question]  # Fallback

    # πŸ”„ Perform each search
    for i, query in enumerate(search_queries, 1):
        print(f"\nπŸ” Search {i}/{len(search_queries)}: {query}")

        try:
            results = search_faq(query)
            if results:
                all_search_results.extend(results)
                print(f"βœ… Found {len(results)} results")
            else:
                print("πŸ“­ No results for this query")

        except Exception as e:
            print(f"❌ Error in search {i}: {e}")

    # πŸ—‚οΈ Remove duplicates and organize results
    unique_results = []
    seen_ids = set()

    for result in all_search_results:
        result_id = result.get('id') or result.get('question', '')
        if result_id not in seen_ids:
            unique_results.append(result)
            seen_ids.add(result_id)

    print(f"\nπŸ“Š Collected {len(unique_results)} unique results from all searches")

    # πŸ“ Build comprehensive context
    if unique_results:
        comprehensive_context = build_context_from_results(unique_results)

        # 🎭 Create final answer prompt
        final_prompt = f"""
πŸŽ“ You're a LLM Zoomcamp teaching assistant. A student asked: "{user_question}"

You performed multiple searches and gathered this comprehensive information:

{comprehensive_context}

Based on ALL the information above, provide a detailed, helpful answer to the student's question.
Structure your answer clearly and reference specific information from the search results.

Be encouraging and supportive! 😊
""".strip()

        # πŸ€– Get final comprehensive answer
        final_response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": final_prompt}]
        )

        answer = final_response.choices[0].message.content

        # πŸ“ˆ Add search summary
        search_summary = f"\n\nπŸ“Š Search Summary: Performed {len(search_queries)} targeted searches and found {len(unique_results)} relevant FAQ entries."

        return answer + search_summary

    else:
        return f"Sorry, I couldn't find specific information about '{user_question}' in our FAQ database. You might want to ask in the course forum or check the course materials directly! πŸŽ“"

# πŸ§ͺ Test iterative search with a complex question
print("\nπŸ§ͺ Testing LLM Zoomcamp Iterative Search Agent!")
print("="*70)

complex_question = "I'm having trouble with my development environment. Docker isn't working and I can't connect to the database."
print(f"πŸ€” Complex question: {complex_question}")
print("-"*50)

iterative_response = iterative_search_agent(complex_question)
print(f"\n🎯 Comprehensive Response:")
print(iterative_response)

πŸŽ“ LLM Zoomcamp Explanation: Iterative search is like being a research detective! πŸ•΅οΈβ€β™€οΈ Instead of just one search, our agent:

  1. 🧠 Plans: Breaks complex questions into smaller, specific searches
  2. πŸ” Searches: Performs multiple targeted searches
  3. πŸ“Š Combines: Merges all results into comprehensive knowledge
  4. πŸ“ Synthesizes: Creates a complete answer from all findings

It's perfect for complex questions that touch on multiple topics!

πŸ’¬ Building Conversational Agents (LLM Zoomcamp Chat Experience)

Let's create a conversational agent that remembers context and can chat naturally with students! πŸ’¬

🧠 Implementing Conversation Memory (LLM Zoomcamp State Management)

class LLMZoomcampChatAgent:
    """
    πŸ’¬ A conversational agent for LLM Zoomcamp with memory and personality!

    This agent remembers your conversation and provides consistent, helpful responses.
    Think of it as your personal course tutor who knows your learning journey! πŸŽ“
    """

    def __init__(self):
        """πŸš€ Initialize the LLM Zoomcamp chat agent."""

        # πŸ’­ Conversation memory
        self.conversation_history = []

        # πŸŽ“ Student context tracking
        self.student_context = {
            "topics_discussed": [],
            "problems_solved": [],
            "current_focus": None,
            "learning_stage": "beginner"  # beginner, intermediate, advanced
        }

        # 🎭 Agent personality and system prompt
        self.system_prompt = """
πŸŽ“ You are a friendly, knowledgeable teaching assistant for the LLM Zoomcamp!

Your personality:
- 😊 Encouraging and supportive
- 🎯 Focused on helping students succeed  
- πŸ“š Expert in course content and technical details
- πŸ’‘ Great at explaining complex concepts simply
- πŸ”„ Remembers conversation context

Your capabilities:
- πŸ” Search course FAQ database
- πŸ“ Add new FAQ entries
- πŸ’¬ Maintain conversation context
- 🎨 Adapt explanations to student level

Always:
- Use emojis to make conversations engaging! 
- Reference previous parts of the conversation when relevant
- Ask clarifying questions if needed
- Provide practical, actionable advice
- Celebrate student progress! πŸŽ‰
""".strip()

        # 🏁 Start conversation
        self.conversation_history.append({
            "role": "system", 
            "content": self.system_prompt
        })

        print("πŸ’¬ LLM Zoomcamp Chat Agent initialized!")
        print("πŸŽ“ Ready to help you learn and succeed!")

    def add_student_context(self, topic: str, context_type: str = "discussed"):
        """
        πŸ“ Track student learning context.

        Args:
            topic (str): What topic was covered
            context_type (str): Type of interaction (discussed, solved, focused)
        """

        if context_type == "discussed" and topic not in self.student_context["topics_discussed"]:
            self.student_context["topics_discussed"].append(topic)
        elif context_type == "solved":
            self.student_context["problems_solved"].append(topic)
        elif context_type == "focus":
            self.student_context["current_focus"] = topic

    def get_context_summary(self) -> str:
        """
        πŸ“Š Generate a summary of the student's learning context.

        Returns:
            str: Formatted context summary
        """

        context = self.student_context
        summary_parts = []

        if context["topics_discussed"]:
            topics = ", ".join(context["topics_discussed"][-3:])  # Last 3 topics
            summary_parts.append(f"πŸ“š Recently discussed: {topics}")

        if context["problems_solved"]:
            problems = ", ".join(context["problems_solved"][-2:])  # Last 2 problems
            summary_parts.append(f"βœ… Recently solved: {problems}")

        if context["current_focus"]:
            summary_parts.append(f"🎯 Current focus: {context['current_focus']}")

        summary_parts.append(f"πŸ“ˆ Learning stage: {context['learning_stage']}")

        return "\n".join(summary_parts) if summary_parts else "πŸ†• New conversation"

    def chat(self, user_message: str) -> str:
        """
        πŸ’¬ Have a natural conversation with memory and context.

        Args:
            user_message (str): Student's message

        Returns:
            str: Agent's helpful response
        """

        print(f"\nπŸ’¬ Student: {user_message}")
        print("πŸ€” Thinking...")

        # πŸ“ Add user message to conversation
        self.conversation_history.append({
            "role": "user",
            "content": user_message
        })

        # πŸ“Š Add context summary for the agent's awareness
        context_summary = self.get_context_summary()
        enhanced_message = f"""
Student Context: {context_summary}

Current Message: {user_message}
""".strip()

        try:
            # πŸ€– Get response with function calling capability
            response = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=self.conversation_history + [
                    {"role": "system", "content": f"Context: {context_summary}"}
                ],
                tools=llm_zoomcamp_tools,
                tool_choice="auto",
                temperature=0.7  # Slightly more creative responses
            )

            assistant_message = response.choices[0].message

            # πŸ”§ Handle tool calls if needed
            if assistant_message.tool_calls:
                print("πŸ”§ Using tools to help answer...")

                # Add assistant message with tool calls
                self.conversation_history.append(assistant_message)

                # Execute tool calls
                for tool_call in assistant_message.tool_calls:
                    function_name = tool_call.function.name
                    function_args = json.loads(tool_call.function.arguments)

                    print(f"⚑ Using: {function_name}")

                    # Execute function
                    result = execute_function_call(function_name, function_args)

                    # Format result
                    if function_name == "search_faq" and result:
                        result_text = build_context_from_results(result)
                    else:
                        result_text = json.dumps(result, indent=2)

                    # Add tool result to conversation
                    self.conversation_history.append({
                        "role": "tool",
                        "tool_call_id": tool_call.id,
                        "content": result_text
                    })

                # Get final response with tool results
                final_response = client.chat.completions.create(
                    model="gpt-4o-mini",
                    messages=self.conversation_history,
                    temperature=0.7
                )

                final_message = final_response.choices[0].message.content

            else:
                final_message = assistant_message.content

            # πŸ“ Add assistant response to conversation
            self.conversation_history.append({
                "role": "assistant",
                "content": final_message
            })

            # 🧠 Update student context based on conversation
            self._update_context_from_message(user_message, final_message)

            print(f"πŸŽ“ Agent: {final_message}")
            return final_message

        except Exception as e:
            error_msg = f"πŸ˜… Sorry, I encountered an error: {str(e)}. Let's try rephrasing your question!"
            print(f"❌ Error: {e}")
            return error_msg

    def _update_context_from_message(self, user_msg: str, agent_msg: str):
        """
        🧠 Update student context based on conversation content.

        Args:
            user_msg (str): What the student said
            agent_msg (str): How the agent responded
        """

        # 🎯 Simple keyword-based context tracking
        technical_topics = ["docker", "python", "environment", "setup", "installation", "database"]
        course_topics = ["assignment", "homework", "deadline", "enrollment", "requirements"]

        for topic in technical_topics + course_topics:
            if topic.lower() in user_msg.lower():
                self.add_student_context(topic, "discussed")

        # πŸŽ‰ Detect if a problem was solved
        if any(word in agent_msg.lower() for word in ["here's how", "solution", "try this", "steps:"]):
            for topic in technical_topics:
                if topic in user_msg.lower():
                    self.add_student_context(f"{topic} problem", "solved")

    def get_conversation_summary(self) -> str:
        """
        πŸ“Š Get a summary of the entire conversation.

        Returns:
            str: Conversation summary
        """

        user_messages = [msg["content"] for msg in self.conversation_history if msg["role"] == "user"]
        total_messages = len(user_messages)

        summary = f"""
πŸŽ“ LLM Zoomcamp Conversation Summary

πŸ’¬ Total messages: {total_messages}
πŸ“š Topics discussed: {len(self.student_context['topics_discussed'])}
βœ… Problems solved: {len(self.student_context['problems_solved'])}
πŸ“ˆ Learning stage: {self.student_context['learning_stage']}

πŸ—£οΈ Recent topics: {', '.join(self.student_context['topics_discussed'][-5:]) if self.student_context['topics_discussed'] else 'None yet'}
""".strip()

        return summary

# πŸ§ͺ Let's test our conversational agent!
print("\nπŸ§ͺ Testing LLM Zoomcamp Conversational Agent!")
print("="*70)

# Create agent instance
chat_agent = LLMZoomcampChatAgent()

# Have a conversation!
print("\nπŸ’¬ Starting conversation...")
response1 = chat_agent.chat("Hi! I'm new to the LLM Zoomcamp. Can you help me get started?")

print("\n" + "-"*40)
response2 = chat_agent.chat("I'm having trouble installing Docker. What should I do?")

print("\n" + "-"*40)
response3 = chat_agent.chat("Thanks! That was helpful. What's the next step after Docker?")

print("\n" + "-"*40)
print("πŸ“Š Conversation Summary:")
print(chat_agent.get_conversation_summary())

πŸŽ“ LLM Zoomcamp Explanation: Our conversational agent is like having a smart tutor who:

  1. πŸ’­ Remembers: Keeps track of your entire conversation
  2. πŸ“š Learns: Understands what topics you've discussed
  3. 🎯 Focuses: Knows what you're currently working on
  4. πŸ”§ Adapts: Uses tools when needed to help you
  5. 😊 Personalizes: Adjusts responses based on your learning stage

It's like having a study buddy who never forgets anything you've talked about! 🀝

🎨 Advanced UI Patterns (LLM Zoomcamp Visual Experience)

Let's create beautiful, interactive interfaces for our agentic systems! Perfect for Jupyter notebooks! 🎨

πŸ“± Creating Rich Jupyter Interfaces (LLM Zoomcamp Interactive Learning)

from IPython.display import display, HTML, Markdown
import time
from datetime import datetime

class LLMZoomcampUI:
    """
    🎨 Beautiful UI components for LLM Zoomcamp agents in Jupyter!

    This class creates gorgeous, interactive interfaces that make
    learning with AI agents a delightful experience! ✨
    """

    def __init__(self, theme: str = "zoomcamp"):
        """
        🎨 Initialize the UI system.

        Args:
            theme (str): UI theme ("zoomcamp", "dark", "light")
        """

        self.theme = theme
        self.conversation_id = f"conv_{int(time.time())}"

        # 🎭 Load theme styles
        self._load_theme_styles()

        print("🎨 LLM Zoomcamp UI system initialized!")
        print(f"✨ Using theme: {theme}")

    def _load_theme_styles(self):
        """🎭 Load CSS styles for the current theme."""

        if self.theme == "zoomcamp":
            self.styles = """
            <style>
            .llm-zoomcamp-container {
                font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
                max-width: 900px;
                margin: 20px auto;
                background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
                border-radius: 15px;
                padding: 25px;
                box-shadow: 0 10px 30px rgba(0,0,0,0.2);
                color: white;
            }

            .message-bubble {
                margin: 15px 0;
                padding: 15px 20px;
                border-radius: 20px;
                box-shadow: 0 3px 10px rgba(0,0,0,0.1);
                animation: slideIn 0.3s ease-out;
            }

            .user-message {
                background: rgba(255,255,255,0.9);
                color: #333;
                margin-left: 50px;
                border-bottom-right-radius: 5px;
            }

            .agent-message {
                background: rgba(255,255,255,0.1);
                color: white;
                margin-right: 50px;
                border-bottom-left-radius: 5px;
                border: 1px solid rgba(255,255,255,0.2);
            }

            .system-message {
                background: rgba(255,193,7,0.2);
                color: #fff3cd;
                text-align: center;
                margin: 10px 20px;
                font-size: 0.9em;
                border: 1px solid rgba(255,193,7,0.3);
            }

            .tool-execution {
                background: rgba(40,167,69,0.2);
                color: #d4edda;
                margin: 10px 30px;
                font-family: 'Courier New', monospace;
                font-size: 0.85em;
                border-left: 4px solid #28a745;
            }

            .header {
                text-align: center;
                margin-bottom: 30px;
                padding-bottom: 20px;
                border-bottom: 2px solid rgba(255,255,255,0.3);
            }

            .header h2 {
                margin: 0;
                font-size: 2.2em;
                text-shadow: 2px 2px 4px rgba(0,0,0,0.3);
            }

            .timestamp {
                font-size: 0.8em;
                opacity: 0.7;
                margin-top: 5px;
            }

            .loading {
                display: inline-block;
                animation: spin 1s linear infinite;
            }

            @keyframes slideIn {
                from { opacity: 0; transform: translateY(10px); }
                to { opacity: 1; transform: translateY(0); }
            }

            @keyframes spin {
                0% { transform: rotate(0deg); }
                100% { transform: rotate(360deg); }
            }

            .emoji-large {
                font-size: 1.5em;
                margin-right: 10px;
            }
            </style>
            """

        else:
            self.styles = ""  # 🎭 Fallback so display() below never hits an undefined attribute

        # Display styles
        display(HTML(self.styles))

    def show_header(self, title: str = "πŸŽ“ LLM Zoomcamp AI Assistant"):
        """
        πŸ† Display a beautiful header for the interface.

        Args:
            title (str): Title to display
        """

        header_html = f"""
        <div class="llm-zoomcamp-container">
            <div class="header">
                <h2>{title}</h2>
                <p>πŸ’¬ Your intelligent course companion for learning and problem-solving!</p>
                <div class="timestamp">πŸ•’ Session started: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}</div>
            </div>
        </div>
        """

        display(HTML(header_html))

    def show_user_message(self, message: str):
        """
        πŸ‘€ Display a user message bubble.

        Args:
            message (str): User's message
        """

        bubble_html = f"""
        <div class="llm-zoomcamp-container">
            <div class="message-bubble user-message">
                <span class="emoji-large">πŸ‘€</span>
                <strong>You:</strong> {message}
                <div class="timestamp">πŸ“… {datetime.now().strftime('%H:%M:%S')}</div>
            </div>
        </div>
        """

        display(HTML(bubble_html))

    def show_agent_message(self, message: str, thinking_time: float = 0):
        """
        πŸ€– Display an agent response bubble.

        Args:
            message (str): Agent's response
            thinking_time (float): How long the agent "thought"
        """

        thinking_indicator = f" (πŸ’­ Thought for {thinking_time:.1f}s)" if thinking_time > 0 else ""
        formatted_message = message.replace("\n", "<br>")  # backslashes aren't allowed inside f-string expressions before Python 3.12

        bubble_html = f"""
        <div class="llm-zoomcamp-container">
            <div class="message-bubble agent-message">
                <span class="emoji-large">πŸŽ“</span>
                <strong>LLM Zoomcamp Assistant:</strong><br>
                {formatted_message}
                <div class="timestamp">πŸ€– {datetime.now().strftime('%H:%M:%S')}{thinking_indicator}</div>
            </div>
        </div>
        """

        display(HTML(bubble_html))

    def show_tool_execution(self, tool_name: str, tool_args: dict, result_summary: str):
        """
        πŸ”§ Display tool execution information.

        Args:
            tool_name (str): Name of the tool used
            tool_args (dict): Arguments passed to the tool
            result_summary (str): Summary of the result
        """

        args_str = ", ".join([f"{k}='{v}'" for k, v in tool_args.items()])

        tool_html = f"""
        <div class="llm-zoomcamp-container">
            <div class="tool-execution">
                <span class="emoji-large">πŸ”§</span>
                <strong>Tool Used:</strong> {tool_name}({args_str})<br>
                <strong>Result:</strong> {result_summary}
                <div class="timestamp">⚑ {datetime.now().strftime('%H:%M:%S')}</div>
            </div>
        </div>
        """

        display(HTML(tool_html))

    def show_system_message(self, message: str):
        """
        βš™οΈ Display a system/status message.

        Args:
            message (str): System message
        """

        system_html = f"""
        <div class="llm-zoomcamp-container">
            <div class="message-bubble system-message">
                <span class="emoji-large">βš™οΈ</span>
                {message}
                <div class="timestamp">πŸ“Š {datetime.now().strftime('%H:%M:%S')}</div>
            </div>
        </div>
        """

        display(HTML(system_html))

    def show_loading(self, message: str = "πŸ€” Thinking..."):
        """
        ⏳ Display a loading message.

        Args:
            message (str): Loading message
        """

        loading_html = f"""
        <div class="llm-zoomcamp-container">
            <div class="message-bubble system-message">
                <span class="loading emoji-large">πŸ”„</span>
                {message}
            </div>
        </div>
        """

        display(HTML(loading_html))

# 🎨 Enhanced Chat Agent with Beautiful UI
class LLMZoomcampVisualAgent(LLMZoomcampChatAgent):
    """
    🎨 LLM Zoomcamp chat agent with beautiful visual interface!

    Combines all the intelligence of our chat agent with gorgeous UI! ✨
    """

    def __init__(self):
        """🎨 Initialize the visual agent."""

        super().__init__()
        self.ui = LLMZoomcampUI()

        # 🎭 Show beautiful header
        self.ui.show_header()
        self.ui.show_system_message("πŸš€ LLM Zoomcamp Visual Agent ready to help!")

    def visual_chat(self, user_message: str) -> str:
        """
        πŸ’¬ Chat with beautiful visual interface.

        Args:
            user_message (str): User's message

        Returns:
            str: Agent's response
        """

        # πŸ‘€ Show user message
        self.ui.show_user_message(user_message)

        # ⏳ Show loading
        self.ui.show_loading("πŸ€” Analyzing your question...")
        time.sleep(1)  # Brief pause for effect

        # πŸ•’ Track thinking time
        start_time = time.time()

        # πŸ“ Add to conversation history
        self.conversation_history.append({
            "role": "user",
            "content": user_message
        })

        try:
            # πŸ€– Get response with tools
            response = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=self.conversation_history + [
                    {"role": "system", "content": f"Context: {self.get_context_summary()}"}
                ],
                tools=llm_zoomcamp_tools,
                tool_choice="auto",
                temperature=0.7
            )

            assistant_message = response.choices[0].message

            # πŸ”§ Handle tool usage
            if assistant_message.tool_calls:
                self.ui.show_system_message("πŸ”§ Using tools to find the best answer...")

                # Add assistant message
                self.conversation_history.append(assistant_message)

                # Execute tools
                for tool_call in assistant_message.tool_calls:
                    function_name = tool_call.function.name
                    function_args = json.loads(tool_call.function.arguments)

                    # Show tool execution
                    result = execute_function_call(function_name, function_args)

                    if function_name == "search_faq" and result:
                        result_summary = f"Found {len(result)} relevant FAQ entries"
                        result_text = build_context_from_results(result)
                    else:
                        result_summary = "Completed successfully"
                        result_text = json.dumps(result, indent=2)

                    self.ui.show_tool_execution(function_name, function_args, result_summary)

                    # Add to conversation
                    self.conversation_history.append({
                        "role": "tool",
                        "tool_call_id": tool_call.id,
                        "content": result_text
                    })

                # Get final response
                final_response = client.chat.completions.create(
                    model="gpt-4o-mini",
                    messages=self.conversation_history,
                    temperature=0.7
                )

                final_message = final_response.choices[0].message.content

            else:
                final_message = assistant_message.content

            # πŸ“ Add to conversation
            self.conversation_history.append({
                "role": "assistant",
                "content": final_message
            })

            # πŸ•’ Calculate thinking time
            thinking_time = time.time() - start_time

            # πŸ€– Show agent response
            self.ui.show_agent_message(final_message, thinking_time)

            # 🧠 Update context
            self._update_context_from_message(user_message, final_message)

            return final_message

        except Exception as e:
            error_msg = f"πŸ˜… Sorry, I encountered an error: {str(e)}"
            self.ui.show_agent_message(error_msg)
            return error_msg

# πŸ§ͺ Demo the beautiful visual interface!
print("πŸ§ͺ Creating Beautiful LLM Zoomcamp Visual Demo!")
print("="*60)

# Note: This would work beautifully in a Jupyter notebook!
print("πŸ“ Visual demo code ready!")
print("🎨 Run this in Jupyter for the full visual experience!")

# Example usage:
demo_code = '''
# 🎨 In a Jupyter notebook, run this for beautiful visual chat:

visual_agent = LLMZoomcampVisualAgent()

# πŸ’¬ Start chatting with beautiful UI!
visual_agent.visual_chat("Hi! I need help setting up my development environment.")
visual_agent.visual_chat("What are the Docker requirements?")
visual_agent.visual_chat("Thanks! How do I troubleshoot if Docker fails to start?")
'''

print("πŸ“‹ Jupyter Notebook Demo Code:")
print(demo_code)

πŸŽ“ LLM Zoomcamp Explanation: Our visual interface makes learning with AI agents a joy! 🎨 It features:

  1. 🎭 Beautiful Themes: Professional, engaging visual design
  2. πŸ’¬ Chat Bubbles: Clear separation between user and agent messages
  3. πŸ”§ Tool Visualization: See exactly what tools are being used
  4. ⏳ Loading States: Smooth, engaging interaction flow
  5. πŸ“Š Status Messages: Clear system feedback and progress updates

Perfect for creating engaging educational experiences in Jupyter notebooks! ✨

πŸŽ“ Key Concepts Mastered in Part 2 (LLM Zoomcamp Advanced Skills)

Congratulations! You've now mastered advanced agentic AI development! πŸŽ‰ Here's what you've learned:

πŸ† Professional Function Calling (LLM Zoomcamp Industry Standard)

  • πŸ›‘οΈ Type Safety: Robust parameter validation and error handling
  • πŸ“ Documentation: Self-describing tools with clear specifications
  • ⚑ Performance: Optimized structured interactions with OpenAI
  • πŸ”§ Extensibility: Easy to add new tools and capabilities

πŸ”„ Iterative Search Strategies (LLM Zoomcamp Research Mastery)

  • 🧠 Strategic Planning: AI-driven search strategy development
  • πŸ” Multi-Query Search: Comprehensive topic exploration
  • πŸ“Š Result Synthesis: Combining multiple searches into coherent answers
  • 🎯 Context Management: Handling complex, multi-faceted questions

πŸ’¬ Conversational Intelligence (LLM Zoomcamp Communication Excellence)

  • πŸ’­ Memory Management: Persistent conversation context
  • πŸ“š Learning Tracking: Student progress and topic mastery
  • 🎯 Adaptive Responses: Context-aware, personalized interactions
  • 😊 Personality: Consistent, engaging agent behavior

🎨 Professional UI/UX (LLM Zoomcamp Visual Excellence)

  • πŸ–ΌοΈ Interactive Interfaces: Beautiful, engaging user experiences
  • πŸ“± Responsive Design: Jupyter-optimized visual components
  • ⚑ Real-time Feedback: Loading states and progress indicators
  • 🎭 Theme Systems: Customizable visual experiences

πŸš€ Advanced Patterns You've Implemented

  1. πŸ”§ Function Orchestration: Chaining multiple tools for complex tasks
  2. 🧠 Context-Aware Decision Making: Using conversation history for better responses
  3. πŸ“Š State Management: Tracking student progress and learning context
  4. 🎨 Visual Feedback Systems: Real-time UI updates and status indicators
  5. ⚑ Error Recovery: Graceful handling of failures and edge cases
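The error-recovery pattern above can go one step further than the try/except blocks in this tutorial: retry transient failures with exponential backoff instead of surfacing the first error. This is a generic sketch (the `flaky` function simulates a call that fails twice, then succeeds), not something wired into the agent classes:

```python
import time

# ⚑ Retry a flaky call with exponential backoff before giving up.
def with_retries(fn, attempts: int = 3, base_delay: float = 0.01):
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise                              # out of attempts: re-raise
            time.sleep(base_delay * (2 ** attempt))  # back off before retrying

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(with_retries(flaky))   # "ok" on the third attempt
```

Wrapping your `client.chat.completions.create` calls this way makes rate-limit blips and network hiccups invisible to the student instead of ending the conversation.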

🎯 Real-World Applications (LLM Zoomcamp Career Preparation)

The patterns you've learned are used in production systems like:

  • 🏒 Customer Support Chatbots: Multi-tool agents that can search knowledge bases
  • πŸŽ“ Educational Platforms: Personalized tutoring with progress tracking
  • πŸ’Ό Business Intelligence: Agents that can query databases and generate reports
  • πŸ” Research Assistants: Systems that can search multiple sources and synthesize information

πŸŽ“ LLM Zoomcamp Achievement: You now have the skills to build professional-grade agentic AI systems! πŸš€


πŸ“š Resources for Part 2


πŸŽ“ LLM Zoomcamp Tutorial Series - Part 2 Complete! πŸŽ‰

Ready for the ultimate challenge? Continue to Part 3 for enterprise-level patterns and production deployment! πŸš€

#LLMZoomcamp
