LLM Zoomcamp Tutorial Series - Building Agentic Assistants with OpenAI Function Calling
Welcome to Part 1 of our comprehensive LLM Zoomcamp tutorial series! This is the foundation where you'll learn the core concepts of RAG (Retrieval-Augmented Generation) and what makes a system "agentic". Perfect for beginners who want to understand how intelligent AI assistants work!
Understanding the Core Problem (LLM Zoomcamp Challenge)
Welcome to your first LLM Zoomcamp agentic project! Our goal is to create an intelligent assistant that can help course participants by leveraging Frequently Asked Questions (FAQ) documents. These FAQ documents contain question-answer pairs that provide valuable information about course enrollment, requirements, and procedures.
Think of it like having a smart study buddy who has read all the course materials!
What We Want to Build:
- Search through FAQ documents intelligently
- Decide when to use external knowledge vs. built-in knowledge
- Make multiple search iterations for complex queries
- Provide contextual, accurate responses
What Makes a System "Agentic"? (LLM Zoomcamp Core Concept)
An agent in AI is like a smart assistant that can think and act independently! Here's what makes it special:
- Interacts with an environment (in our case, the chat dialogue)
- Observes and gathers information (through search functions)
- Performs actions (searching, answering, adding entries)
- Maintains memory of past actions and context
- Makes independent decisions about what to do next
The key difference between basic RAG and agentic RAG is decision-making autonomy! Instead of always searching or always using built-in knowledge, an agentic system can intelligently choose the best approach.
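To make that contrast concrete, here is a minimal sketch of the two control flows. This is not the code we build below; `retrieve`, `decide`, and `generate` are placeholder stubs standing in for the real search and LLM calls we implement later.

def retrieve(question):
    return f"documents related to: {question}"  # stand-in for the FAQ search

def decide(question):
    return "SEARCH" if "course" in question.lower() else "ANSWER"  # stand-in for an LLM decision

def generate(question, context):
    return f"answer to '{question}' using context: {context!r}"  # stand-in for the LLM call

# Basic RAG: the pipeline is fixed - always retrieve, then generate
def basic_rag_flow(question):
    context = retrieve(question)
    return generate(question, context)

# Agentic RAG: the system first decides whether searching is needed at all
def agentic_rag_flow(question):
    context = retrieve(question) if decide(question) == "SEARCH" else ""
    return generate(question, context)

print(agentic_rag_flow("How do I join the course?"))
print(agentic_rag_flow("What is Python?"))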
Building Basic RAG Foundation (LLM Zoomcamp Step-by-Step)
Let's start by building the fundamental building blocks! Think of this as learning to walk before we run.
Step 1: Setting Up Your LLM Zoomcamp Environment
# First, let's install the packages we need
# Think of these as your toolkit for building AI assistants!
pip install openai minsearch requests jupyter markdown
Now let's import our tools one by one:
# Import the libraries (like getting books from a library)
import json                             # For working with data in JSON format
import requests                         # For downloading data from the internet
from openai import OpenAI               # For talking to ChatGPT
from minsearch import AppendableIndex   # For searching through documents

# Initialize the OpenAI client (this is your key to ChatGPT)
# Make sure you have OPENAI_API_KEY set in your environment!
client = OpenAI()
LLM Zoomcamp Tip: Think of the OpenAI client as your telephone to ChatGPT. You'll use it to send questions and get answers!
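If you want to confirm the key is actually visible before making any calls, a quick sanity check like this can help (this check is not part of the original workshop code):

import os

# Hypothetical sanity check: fail early if the API key isn't set
if not os.getenv("OPENAI_API_KEY"):
    raise RuntimeError("OPENAI_API_KEY is not set - export it before creating the OpenAI() client")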
Step 2: Getting and Preparing Our LLM Zoomcamp Data
Now let's get some real FAQ data to work with! This is like downloading all the course materials.
# Step 2a: Download the FAQ documents from the internet
# This URL contains real FAQ data from data engineering courses
docs_url = 'https://github.com/alexeygrigorev/llm-rag-workshop/raw/main/notebooks/documents.json'
docs_response = requests.get(docs_url)
documents_raw = docs_response.json()

print("Downloaded FAQ data successfully!")
print(f"Found {len(documents_raw)} courses with FAQ data")
LLM Zoomcamp Explanation: We're downloading a JSON file that contains FAQ questions and answers from real courses. Think of it as a digital textbook!
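Based on how we access the data in the next step, each entry in documents_raw looks roughly like this (illustrative, shortened values, not the real file contents):

# Illustrative shape of one entry in documents_raw (values are made up / shortened)
example_course = {
    "course": "data-engineering-zoomcamp",
    "documents": [
        {
            "question": "Course - When does the course start?",
            "section": "General course-related questions",
            "text": "The course starts in January. ...",
        },
        # ... more FAQ entries for this course
    ],
}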
# Step 2b: Transform the data into a format we can search
# We're "flattening" the data - turning nested data into a simple list
documents = []

for course in documents_raw:          # Go through each course
    course_name = course['course']    # Get the course name
    for doc in course['documents']:   # Go through each FAQ in that course
        doc['course'] = course_name   # Add course name to each FAQ
        documents.append(doc)         # Add to our main list

print(f"Processed {len(documents)} FAQ documents total!")
print("Each document now has: question, answer, section, and course name")
LLM Zoomcamp Explanation: Imagine you have several books (courses), each with many pages (documents). We're taking all the pages and putting them in one big stack, but we label each page with which book it came from!
# Step 2c: Create our search index (like a super-smart filing cabinet)
index = AppendableIndex(
    text_fields=["question", "text", "section"],  # Fields we can search in
    keyword_fields=["course"]                     # Fields for exact filtering
)

# Put all our documents into the search index
index.fit(documents)

print("Created search index successfully!")
print("Now we can quickly find relevant FAQ answers!")
LLM Zoomcamp Explanation: Think of this index like Google for your FAQ documents. Instead of reading every single document, we can ask "find me documents about Docker" and it will instantly find the relevant ones!
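As a quick taste of what that looks like, you can already query the index directly with default settings; Step 3 below wraps this call with boosting and course filtering:

# Quick illustrative query against the index (Step 3 adds boosting and filtering)
docker_hits = index.search(query="Docker", num_results=3)
print(f"Raw query for 'Docker' returned {len(docker_hits)} documents")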
Step 3: Building Our LLM Zoomcamp Search Function
Now let's create a function that can search through our FAQ documents! This is like having a research assistant.
def search(query):
    """
    Search the FAQ database for relevant entries.
    Think of this as asking a librarian: "Can you find me books about Python?"

    Args:
        query (str): What the user wants to search for (like "Docker setup")

    Returns:
        list: A list of relevant FAQ entries, ranked by relevance
    """
    # Step 3a: Set up boosting (some fields are more important than others)
    # The question field gets a 3x boost; section matches count for much less
    boost = {
        'question': 3.0,  # If the search term appears in a question, it's very relevant!
        'section': 0.5    # If it appears in a section name, it's only somewhat relevant
    }

    # Step 3b: Actually perform the search
    results = index.search(
        query=query,                                          # What to search for
        filter_dict={'course': 'data-engineering-zoomcamp'},  # Only search in this course
        boost_dict=boost,                                     # Use our importance scoring
        num_results=5,                                        # Return top 5 matches
        output_ids=True                                       # Include document IDs
    )

    return results
# Let's test our search function!
test_results = search("How do I install Docker?")
print(f"Found {len(test_results)} results for 'How do I install Docker?'")

# Let's look at the first result
if test_results:
    first_result = test_results[0]
    print(f"First result question: {first_result['question']}")
    print(f"Relevance score: {first_result.get('score', 'N/A')}")
LLM Zoomcamp Explanation: Our search function is like a smart librarian who:
- Knows that questions are more important than section names
- Only looks in the specific course we care about
- Ranks results by how well they match
- Returns the top 5 most relevant answers
Step 4: Creating Our LLM Zoomcamp RAG Pipeline
Now we'll build the complete RAG system step by step! RAG stands for Retrieval-Augmented Generation: retrieve relevant documents, then use them when generating the answer.
# Step 4a: Helper function to format search results
def build_context(search_results):
    """
    Build a context string from search results.
    Think of this as organizing your research notes before writing an essay!

    Args:
        search_results (list): Results from our search function

    Returns:
        str: Nicely formatted context for the AI to use
    """
    context = ""

    # Go through each search result and format it nicely
    for doc in search_results:
        context += f"section: {doc['section']}\n"    # What section this is from
        context += f"question: {doc['question']}\n"  # The original question
        context += f"answer: {doc['text']}\n\n"      # The answer text

    return context.strip()  # Remove extra whitespace
# Let's test our context builder
test_results = search("Docker installation")
test_context = build_context(test_results)

print("Built context from search results:")
print(test_context[:200] + "..." if len(test_context) > 200 else test_context)
LLM Zoomcamp Explanation: The build_context function is like organizing your research notes. Instead of giving ChatGPT a messy pile of information, we organize it neatly so the AI can easily understand and use it!
# Step 4b: Function to talk to ChatGPT
def llm(prompt):
    """
    Send a question to ChatGPT and get an answer back.
    This is like having a conversation with a very smart assistant!

    Args:
        prompt (str): The complete question/instruction for ChatGPT

    Returns:
        str: ChatGPT's response
    """
    # Make the API call to OpenAI
    response = client.chat.completions.create(
        model='gpt-4o-mini',                            # Which AI model to use
        messages=[{"role": "user", "content": prompt}]  # Our question
    )

    # Extract the text response
    return response.choices[0].message.content

print("LLM function ready - we can now talk to ChatGPT!")
LLM Zoomcamp Explanation: This function is your hotline to ChatGPT! You send it a prompt (like a detailed question), and it sends back ChatGPT's answer. Simple!
# Step 4c: Create the main RAG function
def basic_rag(query):
    """
    Our complete RAG pipeline: Search + Context + Generate Answer
    This is the magic! We combine search results with AI to answer questions.

    Args:
        query (str): The user's question (like "How do I join the course?")

    Returns:
        str: A complete, helpful answer
    """
    # Step 1: Search for relevant information
    print(f"Searching for: {query}")
    search_results = search(query)

    # Step 2: Build context from search results
    print(f"Found {len(search_results)} relevant documents")
    context = build_context(search_results)

    # Step 3: Create a detailed prompt for ChatGPT
    prompt_template = """
You're a helpful course teaching assistant for the LLM Zoomcamp!
Your job is to answer the QUESTION based on the CONTEXT from our FAQ database.
Only use facts from the CONTEXT when answering the QUESTION.

<QUESTION>
{question}
</QUESTION>

<CONTEXT>
{context}
</CONTEXT>

Please provide a helpful, detailed answer!
""".strip()

    # Step 4: Fill in the template with our data
    prompt = prompt_template.format(question=query, context=context)
    print("Created prompt for ChatGPT")

    # Step 5: Get answer from ChatGPT
    print("Getting answer from ChatGPT...")
    answer = llm(prompt)

    return answer
# Let's test our complete RAG system!
print("Testing our LLM Zoomcamp RAG system!")

test_question = "How do I join the course?"
answer = basic_rag(test_question)

print(f"\nQuestion: {test_question}")
print(f"Answer: {answer}")
LLM Zoomcamp Explanation: Our basic_rag function is like having a research assistant who:
- Searches through all course materials
- Organizes the relevant information
- Asks ChatGPT a well-structured question
- Returns a helpful answer based on real course data!
Making RAG Agentic: Decision-Making Capabilities (LLM Zoomcamp Advanced)
The basic RAG always searches first, then answers. But what if we want our system to be smarter? An agentic system should decide whether to search or use its own knowledge. Let's make it intelligent!
Enhanced Agentic Prompt (LLM Zoomcamp Magic)
# This is our "smart prompt" that teaches ChatGPT to make decisions
agentic_prompt_template = """
You're a course teaching assistant for the LLM Zoomcamp!
You're given a QUESTION from a student. You have three superpowers:

1. Answer using the provided CONTEXT (if available and good enough)
2. Use your own knowledge if CONTEXT is EMPTY or not helpful
3. Request a search of the FAQ database if you need more info

Current CONTEXT: {context}

<QUESTION>
{question}
</QUESTION>

If CONTEXT is EMPTY or you need more information, respond with:
{{
"action": "SEARCH",
"reasoning": "Explain why you need to search the FAQ database"
}}

If you can answer using CONTEXT, respond with:
{{
"action": "ANSWER",
"answer": "Your detailed, helpful answer here",
"source": "CONTEXT"
}}

If CONTEXT isn't helpful but you can answer from your knowledge:
{{
"action": "ANSWER",
"answer": "Your detailed, helpful answer here",
"source": "OWN_KNOWLEDGE"
}}

Remember: Always be helpful and explain things clearly!
""".strip()

print("Created our intelligent agentic prompt!")
print("Now ChatGPT can decide what to do instead of always searching!")
LLM Zoomcamp Explanation: This prompt is like giving ChatGPT a decision-making flowchart! Instead of always doing the same thing, it can now choose the best action based on the situation. It's like upgrading from a calculator to a smartphone!
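To make the expected format concrete, here is an illustrative example of the kind of JSON decision the prompt asks the model to return. This is made-up sample data, not captured model output; the exact reasoning text will vary from run to run.

# Illustrative example only - not real model output
example_decision = {
    "action": "SEARCH",
    "reasoning": "The CONTEXT is EMPTY and the question is about course logistics, so the FAQ database should be searched."
}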
Implementing Agentic Decision Logic (LLM Zoomcamp Step-by-Step)
Now let's build our smart assistant that can make decisions! This is where the magic happens!
def agentic_rag_v1(question):
    """
    First version of our smart agentic RAG system!
    This assistant can decide whether to search or use its own knowledge.

    Args:
        question (str): The student's question

    Returns:
        dict: The assistant's response with source information
    """
    # Step 1: Start with empty context (no information yet)
    print(f"Starting with question: {question}")
    context = "EMPTY"

    # Step 2: Create prompt and ask ChatGPT what to do
    prompt = agentic_prompt_template.format(question=question, context=context)
    print("Asking ChatGPT to make a decision...")

    # Step 3: Get ChatGPT's decision
    # Note: this assumes the model returns raw JSON (no markdown fences);
    # json.loads will raise an error otherwise
    answer_json = llm(prompt)
    answer = json.loads(answer_json)  # Convert JSON string to Python dictionary
    print(f"ChatGPT decided: {answer['action']}")

    # Step 4: If ChatGPT wants to search, let's do it!
    if answer['action'] == 'SEARCH':
        print(f"Reason for searching: {answer['reasoning']}")
        print("Performing search...")

        # Search the FAQ database
        search_results = search(question)
        context = build_context(search_results)
        print(f"Found {len(search_results)} relevant documents")

        # Ask ChatGPT again, now with context
        prompt = agentic_prompt_template.format(question=question, context=context)
        print("Asking ChatGPT again with search results...")
        answer_json = llm(prompt)
        answer = json.loads(answer_json)

    print(f"Final decision: {answer['action']}")
    return answer
# Let's test our smart assistant!
print("Testing LLM Zoomcamp Agentic Assistant!")
print("\n" + "="*50)

# Test 1: Course-specific question (should search)
print("Test 1: Course-specific question")
result1 = agentic_rag_v1("How do I join the LLM Zoomcamp course?")
print(f"Answer: {result1['answer'][:200]}...")
print(f"Source: {result1['source']}")

print("\n" + "="*50)

# Test 2: General knowledge question (should use own knowledge)
print("Test 2: General knowledge question")
result2 = agentic_rag_v1("How do I install Python on my computer?")
print(f"Answer: {result2['answer'][:200]}...")
print(f"Source: {result2['source']}")
LLM Zoomcamp Explanation: Our smart assistant works like this:
- Think First: "Do I need to search, or do I already know this?"
- Search If Needed: If it's about the course, search the FAQ
- Use Knowledge: If it's general knowledge, answer directly
- Always Cite Sources: Tell us where the answer came from!
It's like having a study buddy who knows when to check the textbook vs. when they already know the answer!
Key Concepts Introduced in Part 1 (LLM Zoomcamp Fundamentals)
Congratulations! You've just built your first intelligent agent! Let's review what you've learned:
- RAG Pipeline: Search → Context Building → LLM Query
  - Like having a research assistant who finds info, organizes it, and writes an answer
- Agentic Decision Making: LLM chooses actions based on available information
  - Your assistant can now think: "Should I search or do I already know this?"
- Structured Output: Using JSON format for consistent action parsing
  - Like having a standard form for the AI to fill out its decisions (see the parsing sketch after this list)
- Context Management: Handling empty vs. populated context states
  - Knowing when you have enough information vs. when you need more
- Source Attribution: Tracking whether answers come from FAQ or general knowledge
  - Always citing your sources - good academic practice!
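Since the whole agentic loop hinges on parsing that JSON, here is a small defensive parsing sketch. It is not part of the workshop code; parse_decision is a hypothetical helper for the common case where the model wraps its JSON in markdown fences.

import json

def parse_decision(raw_response):
    """Hypothetical helper: parse the model's JSON decision defensively.

    Assumes the response is either raw JSON or JSON wrapped in ``` fences.
    """
    cleaned = raw_response.strip()
    if cleaned.startswith("```"):
        # Take what's between the first pair of fences and drop a leading "json" tag
        cleaned = cleaned.split("```")[1]
        if cleaned.startswith("json"):
            cleaned = cleaned[len("json"):]
    return json.loads(cleaned.strip())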
Understanding Agent Behavior (LLM Zoomcamp Insights)
Your agentic system now exhibits intelligent behavior!
- For course-specific questions: Recognizes the need to search the FAQ database
  - "How do I join the course?" → Search FAQ → Answer from course materials
- For general questions: Uses built-in knowledge without unnecessary searches
  - "How do I install Python?" → Use own knowledge → Direct answer
- Context awareness: Makes decisions based on available information
  - Knows the difference between "I have info" vs. "I need to find info"
- Reasoning: Provides explanations for its chosen actions
  - Not just doing things, but explaining WHY it's doing them
LLM Zoomcamp Achievement Unlocked: You now understand the fundamental difference between basic RAG and agentic RAG! Your assistant doesn't just follow a script - it makes intelligent decisions!
What's Next?
This foundation prepares you for more sophisticated agentic behaviors in Part 2, where we'll implement:
- Iterative search strategies that explore topics deeply
- OpenAI Function Calling for professional tool integration
- Conversational agents with memory
- Beautiful user interfaces
Ready to level up to Part 2?
Resources for Part 1
- LLM Zoomcamp Course: Main Repository
- Workshop Code: rag-agents-workshop
- OpenAI Documentation: Chat Completions
- MinSearch Library: Simple search engine
LLM Zoomcamp Tutorial Series - Part 1 Complete!
Continue your journey with Part 2 to master advanced function calling and iterative search!
#LLMZoomcamp