LLM Zoomcamp Tutorial Series - Building Production-Ready Agentic Systems
Welcome to Part 3 of our LLM Zoomcamp tutorial series! You've mastered the fundamentals and advanced techniques; now it's time to build enterprise-grade systems that can handle real-world demands. This is where you become a professional AI engineer!
Enterprise Architecture Patterns (LLM Zoomcamp Production Level)
Building production agentic systems requires thoughtful architecture. Let's explore the patterns used by top tech companies.
The Agent Factory Pattern (LLM Zoomcamp Professional Design)
Instead of creating agents manually, let's build a factory that creates specialized agents on demand!
# π Professional imports for enterprise development
import asyncio
import logging
from typing import Dict, List, Any, Optional, Callable, Union
from dataclasses import dataclass, field
from enum import Enum
from abc import ABC, abstractmethod
import json
import time
from datetime import datetime, timedelta
# π§ Configure professional logging
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger('llm_zoomcamp_enterprise')
class AgentType(Enum):
"""π Different types of agents we can create."""
GENERAL_ASSISTANT = "general_assistant"
TECHNICAL_SUPPORT = "technical_support"
COURSE_ADVISOR = "course_advisor"
FAQ_SPECIALIST = "faq_specialist"
TROUBLESHOOTER = "troubleshooter"
@dataclass
class AgentCapabilities:
"""π― Defines what an agent can do."""
can_search_faq: bool = True
can_add_faq: bool = False
can_troubleshoot: bool = False
can_advise_courses: bool = False
max_iterations: int = 3
temperature: float = 0.7
system_prompt: str = ""
personality_traits: List[str] = field(default_factory=list)
class LLMZoomcampAgentFactory:
"""
π Professional factory for creating specialized LLM Zoomcamp agents.
This factory can create different types of agents with specific capabilities,
just like a car factory that can build different models! π
"""
def __init__(self, client, tools: List[Dict], index, documents):
"""
ποΈ Initialize the agent factory.
Args:
client: OpenAI client instance
tools: Available function tools
index: Search index
documents: FAQ documents
"""
self.client = client
self.tools = tools
self.index = index
self.documents = documents
# π Track created agents for monitoring
self.active_agents = {}
self.agent_metrics = {}
logger.info("LLM Zoomcamp Agent Factory initialized")
def create_agent(self, agent_type: AgentType, agent_id: Optional[str] = None) -> 'LLMZoomcampEnterpriseAgent':
"""
π Create a specialized agent based on type.
Args:
agent_type (AgentType): What kind of agent to create
agent_id (str, optional): Unique identifier for the agent
Returns:
LLMZoomcampEnterpriseAgent: Configured agent instance
"""
if agent_id is None:
agent_id = f"{agent_type.value}_{int(time.time())}"
# π― Get capabilities for this agent type
capabilities = self._get_capabilities_for_type(agent_type)
# π Create the agent
agent = LLMZoomcampEnterpriseAgent(
agent_id=agent_id,
agent_type=agent_type,
capabilities=capabilities,
client=self.client,
tools=self.tools,
index=self.index,
documents=self.documents
)
# π Register the agent
self.active_agents[agent_id] = agent
self.agent_metrics[agent_id] = {
"created_at": datetime.now(),
"interactions": 0,
"total_response_time": 0,
"errors": 0,
"successful_completions": 0
}
logger.info(f"Created {agent_type.value} agent with ID: {agent_id}")
return agent
def _get_capabilities_for_type(self, agent_type: AgentType) -> AgentCapabilities:
"""
π― Configure capabilities based on agent type.
Args:
agent_type (AgentType): Type of agent
Returns:
AgentCapabilities: Configured capabilities
"""
if agent_type == AgentType.GENERAL_ASSISTANT:
return AgentCapabilities(
can_search_faq=True,
can_add_faq=True,
can_troubleshoot=True,
can_advise_courses=True,
max_iterations=5,
temperature=0.7,
system_prompt="""
You are a knowledgeable, friendly LLM Zoomcamp teaching assistant!
Your mission: help students succeed in their AI/ML learning journey.
Personality: encouraging, focused, insightful, motivating.
Capabilities:
- Search the FAQ database for course information
- Add new FAQ entries when you discover gaps
- Troubleshoot technical problems step-by-step
- Provide course guidance and learning paths
- Maintain helpful, engaging conversations
Always be encouraging! Help students feel confident!
""".strip(),
personality_traits=["encouraging", "knowledgeable", "patient", "thorough"]
)
elif agent_type == AgentType.TECHNICAL_SUPPORT:
return AgentCapabilities(
can_search_faq=True,
can_add_faq=True,
can_troubleshoot=True,
can_advise_courses=False,
max_iterations=3,
temperature=0.5, # More focused for technical issues
system_prompt="""
You are a specialized technical support agent for LLM Zoomcamp!
Your expertise: solving technical problems with precision and clarity.
Focus areas:
- Docker installation and configuration
- Python environment setup
- Database connectivity issues
- API configuration and troubleshooting
- Development environment problems
Approach:
1. Identify the exact problem
2. Search for known solutions
3. Provide step-by-step fixes
4. Verify the solution works
5. Document new solutions for others
Be technical but clear. Always test your solutions!
""".strip(),
personality_traits=["analytical", "precise", "methodical", "helpful"]
)
elif agent_type == AgentType.COURSE_ADVISOR:
return AgentCapabilities(
can_search_faq=True,
can_add_faq=False,
can_troubleshoot=False,
can_advise_courses=True,
max_iterations=2,
temperature=0.8, # More creative for course guidance
system_prompt="""
You are an expert course advisor for LLM Zoomcamp students!
Your mission: guide students through their learning journey with wisdom and encouragement.
Specializations:
- Course content and curriculum guidance
- Learning path recommendations
- Timeline and scheduling advice
- Skill development strategies
- Career preparation insights
Approach:
- Understand student goals and background
- Create personalized learning roadmaps
- Celebrate progress and milestones
- Provide motivation during challenges
- Connect learning to career opportunities
Be inspiring and strategic! Help students see the bigger picture!
""".strip(),
personality_traits=["inspiring", "strategic", "supportive", "visionary"]
)
elif agent_type == AgentType.FAQ_SPECIALIST:
return AgentCapabilities(
can_search_faq=True,
can_add_faq=True,
can_troubleshoot=False,
can_advise_courses=False,
max_iterations=2,
temperature=0.6,
system_prompt="""
You are the FAQ specialist for LLM Zoomcamp!
Your mission: maintain and improve the course knowledge base.
Responsibilities:
- Find accurate answers in the FAQ database
- Add new entries for common questions
- Ensure information is current and helpful
- Identify knowledge gaps in our database
- Make complex topics accessible to beginners
Quality standards:
- Always cite sources from the FAQ database
- Write clear, beginner-friendly explanations
- Focus on actionable information
- Update outdated information when found
Be the guardian of knowledge quality!
""".strip(),
personality_traits=["meticulous", "organized", "clear", "authoritative"]
)
else: # TROUBLESHOOTER
return AgentCapabilities(
can_search_faq=True,
can_add_faq=True,
can_troubleshoot=True,
can_advise_courses=False,
max_iterations=4,
temperature=0.4, # Very focused for troubleshooting
system_prompt="""
You are a specialized troubleshooting agent for LLM Zoomcamp!
Your superpower: systematic problem-solving with detective-like precision.
Methodology:
1. Gather detailed problem information
2. Search for similar reported issues
3. Propose systematic testing steps
4. Analyze results and narrow the possibilities
5. Provide validated solutions
6. Document the solution for future cases
Problem-solving approach:
- Scientific method: hypothesis → test → analyze
- Systematic elimination of causes
- Focus on reproducible solutions
- Learn from each case to improve
Turn complex problems into step-by-step solutions!
""".strip(),
personality_traits=["analytical", "systematic", "persistent", "logical"]
)
def get_agent_metrics(self, agent_id: str) -> Dict[str, Any]:
"""
π Get performance metrics for an agent.
Args:
agent_id (str): Agent identifier
Returns:
Dict: Performance metrics
"""
if agent_id not in self.agent_metrics:
return {"error": "Agent not found"}
metrics = self.agent_metrics[agent_id].copy()
# π Calculate derived metrics
if metrics["interactions"] > 0:
metrics["avg_response_time"] = metrics["total_response_time"] / metrics["interactions"]
metrics["success_rate"] = metrics["successful_completions"] / metrics["interactions"]
else:
metrics["avg_response_time"] = 0
metrics["success_rate"] = 0
metrics["uptime"] = datetime.now() - metrics["created_at"]
return metrics
def list_active_agents(self) -> List[Dict[str, Any]]:
"""
π List all active agents with basic info.
Returns:
List[Dict]: Agent information list
"""
agents_info = []
for agent_id, agent in self.active_agents.items():
metrics = self.get_agent_metrics(agent_id)
agents_info.append({
"agent_id": agent_id,
"agent_type": agent.agent_type.value,
"interactions": metrics["interactions"],
"success_rate": f"{metrics['success_rate']:.2%}",
"avg_response_time": f"{metrics['avg_response_time']:.2f}s",
"uptime": str(metrics["uptime"]).split('.')[0] # Remove microseconds
})
return agents_info
logger.info("LLM Zoomcamp Agent Factory defined!")
print("Enterprise Agent Factory ready for production!")
LLM Zoomcamp Explanation: The Factory Pattern is like having a smart manufacturing system! Instead of hand-crafting each agent, our factory:
- Specializes agents: creates agents optimized for specific tasks
- Tracks performance: monitors how well each agent performs
- Manages resources: keeps track of all active agents
- Standardizes creation: ensures consistent quality and capabilities
It's like having different types of doctors (general practitioner, specialist, surgeon) instead of just one!
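In isolation, the factory idea boils down to a lookup from agent kind to preset configuration, so callers never assemble capabilities by hand. A minimal sketch with illustrative names (`AgentKind`, `Config`, and `make_agent` are mine, not from the tutorial code):

```python
from dataclasses import dataclass
from enum import Enum

class AgentKind(Enum):
    GENERAL = "general"
    SUPPORT = "support"

@dataclass(frozen=True)
class Config:
    temperature: float
    max_iterations: int

# Each kind maps to a preset, mirroring the role of _get_capabilities_for_type
PRESETS = {
    AgentKind.GENERAL: Config(temperature=0.7, max_iterations=5),
    AgentKind.SUPPORT: Config(temperature=0.5, max_iterations=3),
}

def make_agent(kind: AgentKind) -> Config:
    """Factory entry point: return the preset configuration for the kind."""
    return PRESETS[kind]

print(make_agent(AgentKind.SUPPORT).temperature)  # 0.5
```

The real factory additionally registers each agent for monitoring; the table-driven lookup is the core of the pattern.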
Enterprise Agent Implementation (LLM Zoomcamp Production Class)
Now let's build the enterprise-grade agent that the factory creates!
class LLMZoomcampEnterpriseAgent:
"""
π’ Enterprise-grade LLM Zoomcamp agent with full production features.
This agent includes monitoring, error handling, rate limiting,
and all the features you need for real-world deployment! π
"""
def __init__(self, agent_id: str, agent_type: AgentType, capabilities: AgentCapabilities,
client, tools: List[Dict], index, documents):
"""
ποΈ Initialize enterprise agent.
Args:
agent_id: Unique identifier
agent_type: Type of agent
capabilities: What this agent can do
client: OpenAI client
tools: Available function tools
index: Search index
documents: FAQ documents
"""
self.agent_id = agent_id
self.agent_type = agent_type
self.capabilities = capabilities
self.client = client
self.tools = tools
self.index = index
self.documents = documents
# π Conversation and state management
self.conversation_history = []
self.session_data = {}
self.last_activity = datetime.now()
# π Performance monitoring
self.performance_metrics = {
"total_requests": 0,
"successful_requests": 0,
"failed_requests": 0,
"total_response_time": 0,
"function_calls_made": 0
}
# π‘οΈ Rate limiting and safety
self.rate_limit_window = timedelta(minutes=1)
self.max_requests_per_window = 30
self.request_timestamps = []
# π Initialize with system prompt
self._initialize_conversation()
logger.info(f"Enterprise agent {agent_id} ({agent_type.value}) initialized")
def _initialize_conversation(self):
"""π Set up the agent's personality and system prompt."""
self.conversation_history = [{
"role": "system",
"content": self.capabilities.system_prompt
}]
def _check_rate_limit(self) -> bool:
"""
π‘οΈ Check if we're within rate limits.
Returns:
bool: True if request is allowed, False if rate limited
"""
now = datetime.now()
# Clean old timestamps
self.request_timestamps = [
ts for ts in self.request_timestamps
if now - ts < self.rate_limit_window
]
# Check if we can make another request
if len(self.request_timestamps) < self.max_requests_per_window:
self.request_timestamps.append(now)
return True
logger.warning(f"Rate limit exceeded for agent {self.agent_id}")
return False
def _execute_function_safely(self, function_name: str, function_args: Dict[str, Any]) -> Any:
"""
π§ Execute function with comprehensive error handling.
Args:
function_name: Name of function to execute
function_args: Function arguments
Returns:
Function result or error information
"""
try:
logger.info(f"Agent {self.agent_id} executing {function_name}")
# π Track function usage
self.performance_metrics["function_calls_made"] += 1
# π― Route to appropriate function
if function_name == "search_faq" and self.capabilities.can_search_faq:
return search_faq(**function_args)
elif function_name == "add_faq_entry" and self.capabilities.can_add_faq:
return add_faq_entry(**function_args)
else:
return {
"error": f"Function {function_name} not available for agent type {self.agent_type.value}"
}
except Exception as e:
logger.error(f"Function execution error in agent {self.agent_id}: {e}")
return {"error": f"Function execution failed: {str(e)}"}
async def process_message_async(self, message: str) -> str:
"""
π Process message asynchronously for better performance.
Args:
message: User's message
Returns:
Agent's response
"""
# π‘οΈ Check rate limits
if not self._check_rate_limit():
return "Please wait a moment - I'm handling many requests right now!"
# π Track request
start_time = time.time()
self.performance_metrics["total_requests"] += 1
self.last_activity = datetime.now()
try:
# π¬ Add user message
self.conversation_history.append({
"role": "user",
"content": message
})
# π€ Process with OpenAI
response = await asyncio.to_thread(
self.client.chat.completions.create,
model="gpt-4o-mini",
messages=self.conversation_history,
tools=self._get_available_tools(),
tool_choice="auto",
temperature=self.capabilities.temperature,
max_tokens=2000
)
assistant_message = response.choices[0].message
iteration = 0
# π Handle tool calls iteratively
while assistant_message.tool_calls and iteration < self.capabilities.max_iterations:
iteration += 1
logger.info(f"Agent {self.agent_id} iteration {iteration}")
# Add assistant message with tool calls
self.conversation_history.append(assistant_message)
# Execute each tool call
for tool_call in assistant_message.tool_calls:
function_name = tool_call.function.name
function_args = json.loads(tool_call.function.arguments)
# π§ Execute function
result = self._execute_function_safely(function_name, function_args)
# Format result
if function_name == "search_faq" and isinstance(result, list):
result_text = build_context_from_results(result)
else:
result_text = json.dumps(result, indent=2)
# Add tool result
self.conversation_history.append({
"role": "tool",
"tool_call_id": tool_call.id,
"content": result_text
})
# Get next response
response = await asyncio.to_thread(
self.client.chat.completions.create,
model="gpt-4o-mini",
messages=self.conversation_history,
tools=self._get_available_tools(),
tool_choice="auto",
temperature=self.capabilities.temperature,
max_tokens=2000
)
assistant_message = response.choices[0].message
# π Add final response
final_response = assistant_message.content
self.conversation_history.append({
"role": "assistant",
"content": final_response
})
# π Track success
response_time = time.time() - start_time
self.performance_metrics["successful_requests"] += 1
self.performance_metrics["total_response_time"] += response_time
logger.info(f"Agent {self.agent_id} completed request in {response_time:.2f}s")
return final_response
except Exception as e:
# π Track failure
self.performance_metrics["failed_requests"] += 1
logger.error(f"Agent {self.agent_id} failed to process message: {e}")
return f"I encountered an error while processing your request: {str(e)}"
def process_message(self, message: str) -> str:
"""
π¬ Synchronous wrapper for message processing.
Args:
message: User's message
Returns:
Agent's response
"""
return asyncio.run(self.process_message_async(message))
def _get_available_tools(self) -> List[Dict]:
"""
π§ Get tools available to this agent based on capabilities.
Returns:
List of available tools
"""
available_tools = []
for tool in self.tools:
function_name = tool["function"]["name"]
# π― Check permissions
if function_name == "search_faq" and self.capabilities.can_search_faq:
available_tools.append(tool)
elif function_name == "add_faq_entry" and self.capabilities.can_add_faq:
available_tools.append(tool)
return available_tools
def get_performance_summary(self) -> Dict[str, Any]:
"""
π Get detailed performance metrics.
Returns:
Performance summary dictionary
"""
metrics = self.performance_metrics.copy()
# π Calculate derived metrics
if metrics["total_requests"] > 0:
metrics["success_rate"] = metrics["successful_requests"] / metrics["total_requests"]
metrics["average_response_time"] = metrics["total_response_time"] / metrics["successful_requests"] if metrics["successful_requests"] > 0 else 0
else:
metrics["success_rate"] = 0
metrics["average_response_time"] = 0
# π Add timing information
metrics["last_activity"] = self.last_activity.isoformat()
metrics["conversation_length"] = len([msg for msg in self.conversation_history if msg["role"] in ["user", "assistant"]])
return {
"agent_id": self.agent_id,
"agent_type": self.agent_type.value,
"capabilities": {
"can_search_faq": self.capabilities.can_search_faq,
"can_add_faq": self.capabilities.can_add_faq,
"can_troubleshoot": self.capabilities.can_troubleshoot,
"can_advise_courses": self.capabilities.can_advise_courses,
"max_iterations": self.capabilities.max_iterations
},
"performance": metrics,
"personality_traits": self.capabilities.personality_traits
}
def clear_conversation(self):
"""π§Ή Clear conversation history while keeping system prompt."""
user_assistant_messages = [
msg for msg in self.conversation_history
if msg["role"] in ["user", "assistant"]
]
logger.info(f"Agent {self.agent_id} cleared {len(user_assistant_messages)} conversation messages")
self._initialize_conversation()
logger.info("Enterprise Agent class defined!")
print("Production-ready agent implementation complete!")
LLM Zoomcamp Explanation: Our enterprise agent is like a professional employee! It has:
- Rate limiting: prevents overuse and ensures fair resource sharing
- Performance monitoring: tracks how well it's performing
- Async processing: can handle multiple requests efficiently
- Capability-based security: only uses tools it's authorized for
- Memory management: maintains conversation context professionally
It's ready for production deployment!
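The rate-limiting piece of the agent is a sliding-window counter; pulled out on its own (the class name `SlidingWindowLimiter` is mine, but the logic matches `_check_rate_limit` above), it looks like this:

```python
from datetime import datetime, timedelta

class SlidingWindowLimiter:
    """Allow at most `limit` requests per rolling `window`."""

    def __init__(self, limit=30, window=timedelta(minutes=1)):
        self.limit = limit
        self.window = window
        self.timestamps = []

    def allow(self, now=None):
        now = now or datetime.now()
        # Drop timestamps that have aged out of the window
        self.timestamps = [ts for ts in self.timestamps if now - ts < self.window]
        if len(self.timestamps) < self.limit:
            self.timestamps.append(now)
            return True
        return False

limiter = SlidingWindowLimiter(limit=2, window=timedelta(seconds=60))
print(limiter.allow(), limiter.allow(), limiter.allow())  # True True False
```

The list scan is O(n) per call, which is fine for per-agent limits in the tens; for high-volume services you'd swap the list for a `collections.deque` and pop expired entries from the front.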
Putting It All Together: LLM Zoomcamp Enterprise Demo (Professional Integration)
Let's see our enterprise system in action!
async def run_enterprise_demo():
"""
Demonstrate the full LLM Zoomcamp enterprise system!
This shows how to use our factory to create specialized agents
and manage them in a production environment.
"""
print("LLM Zoomcamp Enterprise System Demo")
print("=" * 60)
# Create the agent factory
# Note: in a real implementation you'd pass the actual client, tools, index, and documents
print("Initializing Agent Factory...")
# For demo purposes, we'll simulate the components
factory = LLMZoomcampAgentFactory(
client=client,  # assumed to exist from previous sections
tools=llm_zoomcamp_tools,  # from previous sections
index=index,  # from previous sections
documents=documents  # from previous sections
)
print("Factory ready!")
# Create different types of agents
print("\nCreating specialized agents...")
general_agent = factory.create_agent(AgentType.GENERAL_ASSISTANT, "general_001")
tech_agent = factory.create_agent(AgentType.TECHNICAL_SUPPORT, "tech_001")
advisor_agent = factory.create_agent(AgentType.COURSE_ADVISOR, "advisor_001")
print("Created 3 specialized agents!")
# Show active agents
print("\nActive Agents:")
active_agents = factory.list_active_agents()
for agent_info in active_agents:
print(f" - {agent_info['agent_id']} ({agent_info['agent_type']})")
# Simulate conversations with different agents
print("\nTesting Agent Interactions...")
# Test 1: General question to the general assistant
print("\nGeneral Assistant handling course inquiry:")
response1 = await general_agent.process_message_async(
"Hi! I'm new to LLM Zoomcamp. What should I know about getting started?"
)
print(f"Response: {response1[:200]}...")
# Test 2: Technical issue to technical support
print("\nTechnical Support handling Docker issue:")
response2 = await tech_agent.process_message_async(
"I'm having trouble with Docker not starting on my Windows machine. Can you help?"
)
print(f"Response: {response2[:200]}...")
# Test 3: Course guidance to the course advisor
print("\nCourse Advisor providing learning guidance:")
response3 = await advisor_agent.process_message_async(
"I have 3 months to complete the course. How should I structure my learning?"
)
print(f"Response: {response3[:200]}...")
# Show performance metrics
print("\nPerformance Metrics:")
for agent_id in ["general_001", "tech_001", "advisor_001"]:
metrics = factory.get_agent_metrics(agent_id)
print(f"\nAgent: {agent_id}")
print(f"  Interactions: {metrics['interactions']}")
print(f"  Avg Response Time: {metrics['avg_response_time']:.2f}s")
print(f"  Success Rate: {metrics['success_rate']:.2%}")
print("\nEnterprise demo complete!")
print("System ready for production deployment!")
# Run the demo (in a real environment)
print("Enterprise Demo Code Ready!")
print("Run 'asyncio.run(run_enterprise_demo())' to see it in action!")
# π‘ Production deployment example
deployment_example = '''
# π Production Deployment Example
from llm_zoomcamp_enterprise import LLMZoomcampAgentFactory, AgentType
# π§ Initialize with production configuration
factory = LLMZoomcampAgentFactory(
client=production_openai_client,
tools=production_tools,
index=production_search_index,
documents=production_faq_database
)
# π Create agents for different services
customer_support = factory.create_agent(
AgentType.TECHNICAL_SUPPORT,
"support_bot_prod"
)
course_guide = factory.create_agent(
AgentType.COURSE_ADVISOR,
"course_guide_prod"
)
# π Integrate with web API
from flask import Flask, request, jsonify
app = Flask(__name__)
@app.route('/chat/<agent_type>', methods=['POST'])
async def chat_endpoint(agent_type):
message = request.json.get('message')
# π― Route to appropriate agent
if agent_type == 'support':
response = await customer_support.process_message_async(message)
elif agent_type == 'advisor':
response = await course_guide.process_message_async(message)
else:
response = "Unknown agent type"
return jsonify({'response': response})
# π Start production server
if __name__ == '__main__':
app.run(host='0.0.0.0', port=8080)
'''
print("Production Deployment Example:")
print(deployment_example)
LLM Zoomcamp Explanation: Our enterprise system is like running a professional AI consulting firm! We have:
- Factory management: creates the right specialist for each job
- Specialized agents: each agent is an expert in its domain
- Performance monitoring: track how well each agent performs
- Async processing: handle multiple users simultaneously
- API integration: ready to plug into web applications
Perfect for real-world deployment!
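The routing idea in the demo (and in the Flask sketch above) reduces to a dictionary from agent type to handler. A minimal async version with stand-in handlers (`support_agent` and `advisor_agent` are placeholders here, not the real agents):

```python
import asyncio

# Stand-ins for specialized agents; the real ones call the OpenAI API
async def support_agent(msg: str) -> str:
    return f"[support] {msg}"

async def advisor_agent(msg: str) -> str:
    return f"[advisor] {msg}"

# Route table: agent type -> async handler
ROUTES = {"support": support_agent, "advisor": advisor_agent}

async def dispatch(agent_type: str, msg: str) -> str:
    """Look up the handler for the requested agent type and await it."""
    handler = ROUTES.get(agent_type)
    if handler is None:
        return "Unknown agent type"
    return await handler(msg)

print(asyncio.run(dispatch("support", "Docker won't start")))  # [support] Docker won't start
```

A dict lookup keeps the endpoint code free of if/elif chains, and adding a new agent type is a one-line change to the route table.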
Production Frameworks & Tools (LLM Zoomcamp Industry Standards)
Let's explore the professional frameworks used in industry for building agentic systems!
Toy AI Kit Integration (LLM Zoomcamp Framework Exploration)
"""
Toy AI Kit Integration for LLM Zoomcamp
Toy AI Kit is a framework for building conversational AI agents.
Let's see how to integrate our LLM Zoomcamp patterns with it!
"""
# Example integration with toy_ai_kit patterns
class LLMZoomcampToyAIAgent:
"""
LLM Zoomcamp agent compatible with Toy AI Kit patterns.
This shows how to structure agents for framework compatibility!
"""
def __init__(self, name: str, persona: str, tools: List[str]):
"""
π Initialize agent with toy_ai_kit style configuration.
Args:
name: Agent name
persona: Agent personality description
tools: List of available tool names
"""
self.name = name
self.persona = persona
self.available_tools = tools
self.conversation_state = {
"context": [],
"user_preferences": {},
"current_task": None
}
print(f"Created {name} with persona: {persona}")
print(f"Available tools: {', '.join(tools)}")
def process(self, message: str, context: Dict = None) -> Dict[str, Any]:
"""
π Process message with toy_ai_kit style interface.
Args:
message: User's message
context: Additional context
Returns:
Response dictionary with text and metadata
"""
# π Use persona in response generation
persona_prompt = f"""
You are {self.name}, {self.persona}
Available tools: {', '.join(self.available_tools)}
Current context: {context or 'None'}
User message: {message}
Respond in character and use tools if needed.
""".strip()
# π€ Generate response (simplified for demo)
response_text = f"As {self.name}, I'd say: Thanks for your message about '{message}'. Let me help you with that!"
return {
"text": response_text,
"agent_name": self.name,
"tools_used": [], # Would track actual tool usage
"confidence": 0.8,
"context_updated": True
}
# Create LLM Zoomcamp agents in toy_ai_kit style
print("Creating Toy AI Kit Style Agents...")
zoomcamp_helper = LLMZoomcampToyAIAgent(
name="ZoomcampHelper",
persona="a friendly and knowledgeable course assistant who loves helping students succeed in their AI learning journey",
tools=["search_faq", "add_faq_entry", "troubleshoot"]
)
docker_expert = LLMZoomcampToyAIAgent(
name="DockerExpert",
persona="a technical specialist focused on containerization and development environment setup",
tools=["search_faq", "troubleshoot", "system_check"]
)
print("Toy AI Kit style agents created!")
PydanticAI Pattern Integration (LLM Zoomcamp Advanced Frameworks)
"""
PydanticAI Integration for LLM Zoomcamp
PydanticAI is a modern framework for building AI agents with type safety.
Let's adapt our patterns for this powerful framework!
"""
from typing import Union, Literal
from pydantic import BaseModel, Field
# π Define structured data models
class LLMZoomcampQuery(BaseModel):
"""π― Structured query for LLM Zoomcamp assistance."""
question: str = Field(description="The student's question")
category: Literal["technical", "course", "general"] = Field(description="Question category")
urgency: Literal["low", "medium", "high"] = Field(default="medium", description="How urgent is this?")
student_level: Literal["beginner", "intermediate", "advanced"] = Field(default="beginner")
class LLMZoomcampResponse(BaseModel):
"""π Structured response from LLM Zoomcamp agent."""
answer: str = Field(description="The complete answer to the question")
source: Literal["faq", "knowledge", "search"] = Field(description="Where the answer came from")
confidence: float = Field(ge=0.0, le=1.0, description="Confidence in the answer")
follow_up_suggestions: list[str] = Field(default=[], description="Suggested follow-up questions")
tools_used: list[str] = Field(default=[], description="Tools that were used")
class LLMZoomcampAction(BaseModel):
"""π¬ Structured action for agent to take."""
action_type: Literal["search", "respond", "clarify", "escalate"] = Field(description="What action to take")
parameters: dict = Field(default={}, description="Action parameters")
reasoning: str = Field(description="Why this action was chosen")
# π― PydanticAI style agent
class LLMZoomcampPydanticAgent:
"""
π― LLM Zoomcamp agent built with PydanticAI patterns.
This agent uses structured data models for type safety and validation!
"""
def __init__(self, agent_name: str):
"""
π Initialize PydanticAI style agent.
Args:
agent_name: Name of the agent
"""
self.agent_name = agent_name
self.tools_registry = {
"search_faq": self._search_faq_tool,
"add_faq": self._add_faq_tool,
"troubleshoot": self._troubleshoot_tool
}
print(f"Created PydanticAI agent: {agent_name}")
def plan_action(self, query: LLMZoomcampQuery) -> LLMZoomcampAction:
"""
π§ Plan what action to take based on the query.
Args:
query: Structured query from student
Returns:
Planned action with reasoning
"""
# π― Simple planning logic (in real system, this would use LLM)
if query.category == "technical" and query.urgency == "high":
return LLMZoomcampAction(
action_type="search",
parameters={"query": query.question, "focus": "troubleshooting"},
reasoning="High urgency technical question requires immediate FAQ search"
)
elif query.category == "course":
return LLMZoomcampAction(
action_type="search",
parameters={"query": query.question, "focus": "course_info"},
reasoning="Course-related questions should search course materials"
)
else:
return LLMZoomcampAction(
action_type="respond",
parameters={"use_general_knowledge": True},
reasoning="General question can be answered with built-in knowledge"
)
def execute_action(self, action: LLMZoomcampAction, query: LLMZoomcampQuery) -> LLMZoomcampResponse:
"""
π Execute the planned action and generate response.
Args:
action: Action to execute
query: Original query
Returns:
Structured response
"""
tools_used = []
if action.action_type == "search":
# π Execute search tool
search_results = self.tools_registry["search_faq"](action.parameters)
tools_used.append("search_faq")
if search_results:
answer = f"Based on our FAQ database: {search_results}"
source = "faq"
confidence = 0.9
else:
answer = "I couldn't find specific information in our FAQ database."
source = "search"
confidence = 0.3
elif action.action_type == "respond":
# π§ Use general knowledge
answer = f"Based on general knowledge: {query.question} - This is a general response."
source = "knowledge"
confidence = 0.7
else:
answer = "I need to clarify your question before I can help."
source = "knowledge"
confidence = 0.5
# π‘ Generate follow-up suggestions
follow_ups = [
"Would you like more details on any specific part?",
"Is there anything else I can help you with?",
"Do you need help with the next steps?"
]
return LLMZoomcampResponse(
answer=answer,
source=source,
confidence=confidence,
follow_up_suggestions=follow_ups[:2], # Limit to 2
tools_used=tools_used
)
def process_query(self, query_dict: dict) -> dict:
"""
π Main processing method with full validation.
Args:
query_dict: Raw query dictionary
Returns:
Response dictionary
"""
try:
# Validate input
query = LLMZoomcampQuery(**query_dict)
# π§ Plan action
action = self.plan_action(query)
# π Execute action
response = self.execute_action(action, query)
return response.model_dump()
except Exception as e:
return {
"answer": f"Sorry, I encountered an error: {str(e)}",
"source": "error",
"confidence": 0.0,
"follow_up_suggestions": ["Please try rephrasing your question"],
"tools_used": []
}
def _search_faq_tool(self, params: dict) -> str:
"""π Mock search FAQ tool."""
query = params.get("query", "")
return f"Found relevant information about: {query}"
def _add_faq_tool(self, params: dict) -> str:
"""π Mock add FAQ tool."""
return "FAQ entry added successfully"
def _troubleshoot_tool(self, params: dict) -> str:
"""π§ Mock troubleshooting tool."""
return "Troubleshooting steps provided"
# Test the PydanticAI-style agent
print("\nTesting PydanticAI Style Agent...")
pydantic_agent = LLMZoomcampPydanticAgent("ZoomcampPydanticBot")
# Test with structured query
test_query = {
"question": "How do I fix Docker installation issues on Windows?",
"category": "technical",
"urgency": "high",
"student_level": "beginner"
}
response = pydantic_agent.process_query(test_query)
print(f"Question: {test_query['question']}")
print(f"Response: {response['answer']}")
print(f"Source: {response['source']}")
print(f"Confidence: {response['confidence']}")
print(f"Tools used: {response['tools_used']}")
print(f"Follow-ups: {response['follow_up_suggestions']}")
print("PydanticAI integration demo complete!")
LLM Zoomcamp Explanation: Professional frameworks give us:
- Type safety: structured data models prevent errors
- Validation: automatic checking of inputs and outputs
- Documentation: self-describing interfaces
- Tool integration: standardized ways to add capabilities
- Scalability: patterns that work for large systems
These are the tools that professional AI engineers use!
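The structured-validation idea can be shown with the standard library alone. This sketch mimics what the `Literal[...]` fields above enforce in pydantic; the names `Query`, `parse_query`, and `VALID_CATEGORIES` are illustrative, not part of any framework:

```python
from dataclasses import dataclass
from typing import Optional

VALID_CATEGORIES = {"technical", "course", "general"}

@dataclass
class Query:
    """Stdlib stand-in for the pydantic LLMZoomcampQuery model."""
    question: str
    category: str
    urgency: str = "medium"

    def __post_init__(self):
        # Reject values that a Literal[...] field would reject in pydantic
        if self.category not in VALID_CATEGORIES:
            raise ValueError(f"invalid category: {self.category!r}")

def parse_query(raw: dict) -> Optional[Query]:
    """Validate raw input, returning None instead of raising (like a guarded endpoint)."""
    try:
        return Query(**raw)
    except (TypeError, ValueError):
        return None

print(parse_query({"question": "How do I install Docker?", "category": "technical"}))
print(parse_query({"question": "Help!", "category": "tecnical"}))  # None: typo rejected
```

Pydantic adds coercion, nested models, and JSON-schema export on top of this, but the payoff is the same: malformed input is rejected at the boundary, before any agent logic runs.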
Enterprise Deployment Patterns (LLM Zoomcamp Production Readiness)
You've mastered the complete spectrum of agentic AI development! Let's review your journey:
From Beginner to Professional (LLM Zoomcamp Learning Path)
Part 1 - Foundation: You learned basic RAG and decision-making
- Search → Context → Generate pipeline
- Agentic decision logic
- JSON-based function calling
Part 2 - Advanced: You mastered professional function calling and iteration
- OpenAI Function Calling API
- Iterative search strategies
- Conversational agents with memory
- Beautiful UI components
Part 3 - Enterprise: You built production-ready systems
- Factory patterns for agent creation
- Performance monitoring and rate limiting
- Async processing and error handling
- Framework integration patterns
Professional Skills Achieved (LLM Zoomcamp Career Ready)
You now have the skills to build:
- Enterprise chatbots: multi-agent systems for customer support
- Educational platforms: personalized tutoring with progress tracking
- Business intelligence: agents that query databases and generate insights
- Research systems: multi-source information gathering and synthesis
- Technical support: automated troubleshooting with human escalation
Next Steps in Your LLM Zoomcamp Journey
- Keep learning: explore advanced topics like multi-modal agents
- Build projects: create your own agentic systems for real problems
- Join the community: connect with other LLM Zoomcamp learners
- Apply your skills: use these patterns in your work or personal projects
- Teach others: share your knowledge with fellow learners
LLM Zoomcamp Achievement Unlocked
Agentic AI Engineer: You've mastered the complete stack from basic RAG to enterprise deployment!
Skills Certified:
- RAG pipeline design
- Function calling architecture
- Conversational AI development
- Enterprise system design
- Production deployment patterns
Ready For:
- Senior AI engineer roles
- Building production AI products
- Leading AI training initiatives
- Innovating new agent architectures
Congratulations, LLM Zoomcamp Graduate!
You've completed an incredible journey from basic concepts to enterprise-grade agentic systems!
The patterns and techniques you've learned are used by top AI companies worldwide. You're now equipped to build the next generation of intelligent assistants!
Keep building, keep learning, and keep pushing the boundaries of what's possible with AI!
Professional Resources
- LLM Zoomcamp Community: Join the discussion
- Enterprise AI Patterns: Microsoft AI Architecture
- Production Frameworks: LangChain | LlamaIndex
- Monitoring Tools: Weights & Biases | MLflow
- Deployment Platforms: Azure OpenAI | AWS Bedrock
LLM Zoomcamp Tutorial Series - Part 3 Complete!
You are now a certified LLM Zoomcamp Agentic AI Engineer! Go build amazing things!
#LLMZoomcamp