Abdelrahman Adnan
🏭 Part 3: Professional - Enterprise Patterns & Production Deployment

🎓 LLM Zoomcamp Tutorial Series - Building Production-Ready Agentic Systems

Welcome to Part 3 of our LLM Zoomcamp tutorial series! 🎓 You've mastered the fundamentals and advanced techniques. Now it's time to build enterprise-grade systems that can handle real-world demands! This is where you become a professional AI engineer! 🏭⚡


πŸ—οΈ Enterprise Architecture Patterns (LLM Zoomcamp Production Level)

Building production agentic systems requires thoughtful architecture! Let's explore the patterns used by top tech companies. 🏢

🎯 The Agent Factory Pattern (LLM Zoomcamp Professional Design)

Instead of creating agents manually, let's build a factory that can create specialized agents on demand! 🏭

# 🏭 Professional imports for enterprise development
import asyncio
import logging
from typing import Dict, List, Any, Optional, Callable, Union
from dataclasses import dataclass, field
from enum import Enum
from abc import ABC, abstractmethod
import json
import time
from datetime import datetime, timedelta

# 🔧 Configure professional logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger('llm_zoomcamp_enterprise')

class AgentType(Enum):
    """🎭 Different types of agents we can create."""
    GENERAL_ASSISTANT = "general_assistant"
    TECHNICAL_SUPPORT = "technical_support"
    COURSE_ADVISOR = "course_advisor"
    FAQ_SPECIALIST = "faq_specialist"
    TROUBLESHOOTER = "troubleshooter"

@dataclass
class AgentCapabilities:
    """🎯 Defines what an agent can do."""
    can_search_faq: bool = True
    can_add_faq: bool = False
    can_troubleshoot: bool = False
    can_advise_courses: bool = False
    max_iterations: int = 3
    temperature: float = 0.7
    system_prompt: str = ""
    personality_traits: List[str] = field(default_factory=list)

class LLMZoomcampAgentFactory:
    """
    🏭 Professional factory for creating specialized LLM Zoomcamp agents.

    This factory can create different types of agents with specific capabilities,
    just like a car factory that can build different models! 🚗
    """

    def __init__(self, client, tools: List[Dict], index, documents):
        """
        πŸ—οΈ Initialize the agent factory.

        Args:
            client: OpenAI client instance
            tools: Available function tools
            index: Search index
            documents: FAQ documents
        """

        self.client = client
        self.tools = tools
        self.index = index
        self.documents = documents

        # 📊 Track created agents for monitoring
        self.active_agents = {}
        self.agent_metrics = {}

        logger.info("🏭 LLM Zoomcamp Agent Factory initialized")

    def create_agent(self, agent_type: AgentType, agent_id: Optional[str] = None) -> 'LLMZoomcampEnterpriseAgent':
        """
        🎭 Create a specialized agent based on type.

        Args:
            agent_type (AgentType): What kind of agent to create
            agent_id (str, optional): Unique identifier for the agent

        Returns:
            LLMZoomcampEnterpriseAgent: Configured agent instance
        """

        if agent_id is None:
            agent_id = f"{agent_type.value}_{int(time.time())}"

        # 🎯 Get capabilities for this agent type
        capabilities = self._get_capabilities_for_type(agent_type)

        # 🎭 Create the agent
        agent = LLMZoomcampEnterpriseAgent(
            agent_id=agent_id,
            agent_type=agent_type,
            capabilities=capabilities,
            client=self.client,
            tools=self.tools,
            index=self.index,
            documents=self.documents
        )

        # 📊 Register the agent
        self.active_agents[agent_id] = agent
        self.agent_metrics[agent_id] = {
            "created_at": datetime.now(),
            "interactions": 0,
            "total_response_time": 0,
            "errors": 0,
            "successful_completions": 0
        }

        logger.info(f"🎭 Created {agent_type.value} agent with ID: {agent_id}")
        return agent

    def _get_capabilities_for_type(self, agent_type: AgentType) -> AgentCapabilities:
        """
        🎯 Configure capabilities based on agent type.

        Args:
            agent_type (AgentType): Type of agent

        Returns:
            AgentCapabilities: Configured capabilities
        """

        if agent_type == AgentType.GENERAL_ASSISTANT:
            return AgentCapabilities(
                can_search_faq=True,
                can_add_faq=True,
                can_troubleshoot=True,
                can_advise_courses=True,
                max_iterations=5,
                temperature=0.7,
                system_prompt="""
🎓 You are a knowledgeable, friendly LLM Zoomcamp teaching assistant!

Your mission: Help students succeed in their AI/ML learning journey.

Personality: 😊 Encouraging, 🎯 Focused, 💡 Insightful, 🚀 Motivating

Capabilities:
- 🔍 Search FAQ database for course information
- 📝 Add new FAQ entries when you discover gaps
- 🔧 Troubleshoot technical problems step-by-step
- 🎓 Provide course guidance and learning paths
- 💬 Maintain helpful, engaging conversations

Always use emojis and be encouraging! Help students feel confident! 🌟
""".strip(),
                personality_traits=["encouraging", "knowledgeable", "patient", "thorough"]
            )

        elif agent_type == AgentType.TECHNICAL_SUPPORT:
            return AgentCapabilities(
                can_search_faq=True,
                can_add_faq=True,
                can_troubleshoot=True,
                can_advise_courses=False,
                max_iterations=3,
                temperature=0.5,  # More focused for technical issues
                system_prompt="""
🔧 You are a specialized technical support agent for LLM Zoomcamp!

Your expertise: Solving technical problems with precision and clarity.

Focus areas:
- 🐳 Docker installation and configuration
- 🐍 Python environment setup
- 💾 Database connectivity issues
- 🔌 API configuration and troubleshooting
- 🛠️ Development environment problems

Approach:
1. 🎯 Identify the exact problem
2. 🔍 Search for known solutions
3. 📝 Provide step-by-step fixes
4. ✅ Verify the solution works
5. 📚 Document new solutions for others

Be technical but clear. Always test your solutions! 🧪
""".strip(),
                personality_traits=["analytical", "precise", "methodical", "helpful"]
            )

        elif agent_type == AgentType.COURSE_ADVISOR:
            return AgentCapabilities(
                can_search_faq=True,
                can_add_faq=False,
                can_troubleshoot=False,
                can_advise_courses=True,
                max_iterations=2,
                temperature=0.8,  # More creative for course guidance
                system_prompt="""
🎓 You are an expert course advisor for LLM Zoomcamp students!

Your mission: Guide students through their learning journey with wisdom and encouragement.

Specializations:
- 📚 Course content and curriculum guidance
- 🎯 Learning path recommendations
- ⏰ Timeline and scheduling advice
- 🏆 Skill development strategies
- 💼 Career preparation insights

Approach:
- 🤝 Understand student goals and background
- 🗺️ Create personalized learning roadmaps
- 🎉 Celebrate progress and milestones
- 💪 Provide motivation during challenges
- 🚀 Connect learning to career opportunities

Be inspiring and strategic! Help students see the bigger picture! ✨
""".strip(),
                personality_traits=["inspiring", "strategic", "supportive", "visionary"]
            )

        elif agent_type == AgentType.FAQ_SPECIALIST:
            return AgentCapabilities(
                can_search_faq=True,
                can_add_faq=True,
                can_troubleshoot=False,
                can_advise_courses=False,
                max_iterations=2,
                temperature=0.6,
                system_prompt="""
📚 You are the FAQ specialist for LLM Zoomcamp!

Your mission: Maintain and improve the course knowledge base.

Responsibilities:
- 🔍 Find accurate answers in the FAQ database
- 📝 Add new entries for common questions
- 🎯 Ensure information is current and helpful
- 📊 Identify knowledge gaps in our database
- ✨ Make complex topics accessible to beginners

Quality standards:
- ✅ Always cite sources from FAQ database
- 📝 Write clear, beginner-friendly explanations
- 🎯 Focus on actionable information
- 🔄 Update outdated information when found

Be the guardian of knowledge quality! 📖
""".strip(),
                personality_traits=["meticulous", "organized", "clear", "authoritative"]
            )

        else:  # TROUBLESHOOTER
            return AgentCapabilities(
                can_search_faq=True,
                can_add_faq=True,
                can_troubleshoot=True,
                can_advise_courses=False,
                max_iterations=4,
                temperature=0.4,  # Very focused for troubleshooting
                system_prompt="""
πŸ” You are a specialized troubleshooting agent for LLM Zoomcamp!

Your superpower: Systematic problem-solving with detective-like precision.

Methodology:
1. 🎯 Gather detailed problem information
2. πŸ” Search for similar reported issues
3. πŸ§ͺ Propose systematic testing steps
4. πŸ“Š Analyze results and narrow possibilities
5. βœ… Provide validated solutions
6. πŸ“ Document the solution for future cases

Problem-solving approach:
- πŸ”¬ Scientific method: hypothesis β†’ test β†’ analyze
- πŸ“‹ Systematic elimination of causes
- 🎯 Focus on reproducible solutions
- πŸ“š Learn from each case to improve

Turn complex problems into step-by-step solutions! πŸ› οΈ
""".strip(),
                personality_traits=["analytical", "systematic", "persistent", "logical"]
            )

    def get_agent_metrics(self, agent_id: str) -> Dict[str, Any]:
        """
        📊 Get performance metrics for an agent.

        Args:
            agent_id (str): Agent identifier

        Returns:
            Dict: Performance metrics
        """

        if agent_id not in self.agent_metrics:
            return {"error": "Agent not found"}

        metrics = self.agent_metrics[agent_id].copy()

        # 📈 Calculate derived metrics
        if metrics["interactions"] > 0:
            metrics["avg_response_time"] = metrics["total_response_time"] / metrics["interactions"]
            metrics["success_rate"] = metrics["successful_completions"] / metrics["interactions"]
        else:
            metrics["avg_response_time"] = 0
            metrics["success_rate"] = 0

        metrics["uptime"] = datetime.now() - metrics["created_at"]

        return metrics

    def list_active_agents(self) -> List[Dict[str, Any]]:
        """
        📋 List all active agents with basic info.

        Returns:
            List[Dict]: Agent information list
        """

        agents_info = []

        for agent_id, agent in self.active_agents.items():
            metrics = self.get_agent_metrics(agent_id)

            agents_info.append({
                "agent_id": agent_id,
                "agent_type": agent.agent_type.value,
                "interactions": metrics["interactions"],
                "success_rate": f"{metrics['success_rate']:.2%}",
                "avg_response_time": f"{metrics['avg_response_time']:.2f}s",
                "uptime": str(metrics["uptime"]).split('.')[0]  # Remove microseconds
            })

        return agents_info

logger.info("🏭 LLM Zoomcamp Agent Factory defined!")
print("✅ Enterprise Agent Factory ready for production!")

🎓 LLM Zoomcamp Explanation: The Factory Pattern is like having a smart manufacturing system! 🏭 Instead of hand-crafting each agent, our factory:

  1. 🎯 Specializes Agents: Creates agents optimized for specific tasks
  2. 📊 Tracks Performance: Monitors how well each agent performs
  3. 🔧 Manages Resources: Keeps track of all active agents
  4. ⚙️ Standardizes Creation: Ensures consistent quality and capabilities

It's like having different types of doctors (general practitioner, specialist, surgeon) instead of just one! 👨‍⚕️
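Stripped of the LLM plumbing, the factory boils down to two things: a lookup from agent type to a capability preset, and a registry of created agents. Here is a minimal, self-contained sketch of that skeleton (the names `MiniFactory`, `MiniAgentType`, and `PRESETS` are illustrative stand-ins, not part of the tutorial code):

```python
from dataclasses import dataclass
from enum import Enum

class MiniAgentType(Enum):
    GENERAL = "general"
    SUPPORT = "support"

@dataclass
class MiniCapabilities:
    max_iterations: int
    temperature: float

# Per-type presets, mirroring what _get_capabilities_for_type does
PRESETS = {
    MiniAgentType.GENERAL: MiniCapabilities(max_iterations=5, temperature=0.7),
    MiniAgentType.SUPPORT: MiniCapabilities(max_iterations=3, temperature=0.5),
}

class MiniFactory:
    def __init__(self):
        self.active = {}  # registry: agent_id -> capabilities

    def create(self, agent_type: MiniAgentType, agent_id: str) -> MiniCapabilities:
        caps = PRESETS[agent_type]
        self.active[agent_id] = caps  # track it, like active_agents does
        return caps

factory = MiniFactory()
caps = factory.create(MiniAgentType.SUPPORT, "support_001")
print(caps.temperature)      # 0.5
print(list(factory.active))  # ['support_001']
```

Everything else in the full factory (metrics, logging, the agent class itself) hangs off this type-to-preset lookup.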

🎭 Enterprise Agent Implementation (LLM Zoomcamp Production Class)

Now let's build our enterprise-grade agent that can be created by the factory! 💼

class LLMZoomcampEnterpriseAgent:
    """
    🏢 Enterprise-grade LLM Zoomcamp agent with full production features.

    This agent includes monitoring, error handling, rate limiting,
    and all the features you need for real-world deployment! 🚀
    """

    def __init__(self, agent_id: str, agent_type: AgentType, capabilities: AgentCapabilities,
                 client, tools: List[Dict], index, documents):
        """
        πŸ—οΈ Initialize enterprise agent.

        Args:
            agent_id: Unique identifier
            agent_type: Type of agent
            capabilities: What this agent can do
            client: OpenAI client
            tools: Available function tools
            index: Search index
            documents: FAQ documents
        """

        self.agent_id = agent_id
        self.agent_type = agent_type
        self.capabilities = capabilities
        self.client = client
        self.tools = tools
        self.index = index
        self.documents = documents

        # 💭 Conversation and state management
        self.conversation_history = []
        self.session_data = {}
        self.last_activity = datetime.now()

        # 📊 Performance monitoring
        self.performance_metrics = {
            "total_requests": 0,
            "successful_requests": 0,
            "failed_requests": 0,
            "total_response_time": 0,
            "function_calls_made": 0
        }

        # πŸ›‘οΈ Rate limiting and safety
        self.rate_limit_window = timedelta(minutes=1)
        self.max_requests_per_window = 30
        self.request_timestamps = []

        # 🎭 Initialize with system prompt
        self._initialize_conversation()

        logger.info(f"🎭 Enterprise agent {agent_id} ({agent_type.value}) initialized")

    def _initialize_conversation(self):
        """🎭 Set up the agent's personality and system prompt."""

        self.conversation_history = [{
            "role": "system",
            "content": self.capabilities.system_prompt
        }]

    def _check_rate_limit(self) -> bool:
        """
        πŸ›‘οΈ Check if we're within rate limits.

        Returns:
            bool: True if request is allowed, False if rate limited
        """

        now = datetime.now()

        # 🧹 Clean old timestamps
        self.request_timestamps = [
            ts for ts in self.request_timestamps 
            if now - ts < self.rate_limit_window
        ]

        # ✅ Check if we can make another request
        if len(self.request_timestamps) < self.max_requests_per_window:
            self.request_timestamps.append(now)
            return True

        logger.warning(f"🛡️ Rate limit exceeded for agent {self.agent_id}")
        return False

    def _execute_function_safely(self, function_name: str, function_args: Dict[str, Any]) -> Any:
        """
        🔧 Execute function with comprehensive error handling.

        Args:
            function_name: Name of function to execute
            function_args: Function arguments

        Returns:
            Function result or error information
        """

        try:
            logger.info(f"🔧 Agent {self.agent_id} executing {function_name}")

            # 📊 Track function usage
            self.performance_metrics["function_calls_made"] += 1

            # 🎯 Route to the matching tool function
            # (search_faq and add_faq_entry are the tools defined earlier in this series)
            if function_name == "search_faq" and self.capabilities.can_search_faq:
                return search_faq(**function_args)
            elif function_name == "add_faq_entry" and self.capabilities.can_add_faq:
                return add_faq_entry(**function_args)
            else:
                return {
                    "error": f"Function {function_name} not available for agent type {self.agent_type.value}"
                }

        except Exception as e:
            logger.error(f"❌ Function execution error in agent {self.agent_id}: {e}")
            return {"error": f"Function execution failed: {str(e)}"}

    async def process_message_async(self, message: str) -> str:
        """
        🚀 Process message asynchronously for better performance.

        Args:
            message: User's message

        Returns:
            Agent's response
        """

        # πŸ›‘οΈ Check rate limits
        if not self._check_rate_limit():
            return "πŸ›‘οΈ Please wait a moment - I'm handling many requests right now!"

        # πŸ“Š Track request
        start_time = time.time()
        self.performance_metrics["total_requests"] += 1
        self.last_activity = datetime.now()

        try:
            # πŸ’¬ Add user message
            self.conversation_history.append({
                "role": "user",
                "content": message
            })

            # πŸ€– Process with OpenAI
            response = await asyncio.to_thread(
                self.client.chat.completions.create,
                model="gpt-4o-mini",
                messages=self.conversation_history,
                tools=self._get_available_tools(),
                tool_choice="auto",
                temperature=self.capabilities.temperature,
                max_tokens=2000
            )

            assistant_message = response.choices[0].message
            iteration = 0

            # 🔄 Handle tool calls iteratively
            while assistant_message.tool_calls and iteration < self.capabilities.max_iterations:
                iteration += 1
                logger.info(f"🔄 Agent {self.agent_id} iteration {iteration}")

                # Add assistant message with tool calls
                self.conversation_history.append(assistant_message)

                # Execute each tool call
                for tool_call in assistant_message.tool_calls:
                    function_name = tool_call.function.name
                    function_args = json.loads(tool_call.function.arguments)

                    # 🔧 Execute function
                    result = self._execute_function_safely(function_name, function_args)

                    # Format result
                    if function_name == "search_faq" and isinstance(result, list):
                        result_text = build_context_from_results(result)
                    else:
                        result_text = json.dumps(result, indent=2)

                    # Add tool result
                    self.conversation_history.append({
                        "role": "tool",
                        "tool_call_id": tool_call.id,
                        "content": result_text
                    })

                # Get next response
                response = await asyncio.to_thread(
                    self.client.chat.completions.create,
                    model="gpt-4o-mini",
                    messages=self.conversation_history,
                    tools=self._get_available_tools(),
                    tool_choice="auto",
                    temperature=self.capabilities.temperature,
                    max_tokens=2000
                )

                assistant_message = response.choices[0].message

            # πŸ“ Add final response
            final_response = assistant_message.content
            self.conversation_history.append({
                "role": "assistant",
                "content": final_response
            })

            # 📊 Track success
            response_time = time.time() - start_time
            self.performance_metrics["successful_requests"] += 1
            self.performance_metrics["total_response_time"] += response_time

            logger.info(f"✅ Agent {self.agent_id} completed request in {response_time:.2f}s")
            return final_response

        except Exception as e:
            # 📊 Track failure
            self.performance_metrics["failed_requests"] += 1
            logger.error(f"❌ Agent {self.agent_id} failed to process message: {e}")

            return f"😅 I encountered an error while processing your request: {str(e)}"

    def process_message(self, message: str) -> str:
        """
        💬 Synchronous wrapper for message processing.

        Args:
            message: User's message

        Returns:
            Agent's response
        """

        # ⚠️ asyncio.run() raises RuntimeError inside an already-running event
        # loop (e.g. Jupyter); call process_message_async directly in that case.
        return asyncio.run(self.process_message_async(message))

    def _get_available_tools(self) -> List[Dict]:
        """
        🔧 Get tools available to this agent based on capabilities.

        Returns:
            List of available tools
        """

        available_tools = []

        for tool in self.tools:
            function_name = tool["function"]["name"]

            # 🎯 Check permissions
            if function_name == "search_faq" and self.capabilities.can_search_faq:
                available_tools.append(tool)
            elif function_name == "add_faq_entry" and self.capabilities.can_add_faq:
                available_tools.append(tool)

        return available_tools

    def get_performance_summary(self) -> Dict[str, Any]:
        """
        📊 Get detailed performance metrics.

        Returns:
            Performance summary dictionary
        """

        metrics = self.performance_metrics.copy()

        # 📈 Calculate derived metrics
        if metrics["total_requests"] > 0:
            metrics["success_rate"] = metrics["successful_requests"] / metrics["total_requests"]
            metrics["average_response_time"] = metrics["total_response_time"] / metrics["successful_requests"] if metrics["successful_requests"] > 0 else 0
        else:
            metrics["success_rate"] = 0
            metrics["average_response_time"] = 0

        # 🕒 Add timing information
        metrics["last_activity"] = self.last_activity.isoformat()
        metrics["conversation_length"] = len([msg for msg in self.conversation_history if msg["role"] in ["user", "assistant"]])

        return {
            "agent_id": self.agent_id,
            "agent_type": self.agent_type.value,
            "capabilities": {
                "can_search_faq": self.capabilities.can_search_faq,
                "can_add_faq": self.capabilities.can_add_faq,
                "can_troubleshoot": self.capabilities.can_troubleshoot,
                "can_advise_courses": self.capabilities.can_advise_courses,
                "max_iterations": self.capabilities.max_iterations
            },
            "performance": metrics,
            "personality_traits": self.capabilities.personality_traits
        }

    def clear_conversation(self):
        """🧹 Clear conversation history while keeping system prompt."""
        user_assistant_messages = [
            msg for msg in self.conversation_history 
            if msg["role"] in ["user", "assistant"]
        ]

        logger.info(f"🧹 Agent {self.agent_id} cleared {len(user_assistant_messages)} conversation messages")
        self._initialize_conversation()

logger.info("🏢 Enterprise Agent class defined!")
print("✅ Production-ready agent implementation complete!")

🎓 LLM Zoomcamp Explanation: Our enterprise agent is like a professional employee! 👔 It has:

  1. 🛡️ Rate Limiting: Prevents overuse and ensures fair resource sharing
  2. 📊 Performance Monitoring: Tracks how well it's performing
  3. 🔄 Async Processing: Can handle multiple requests efficiently
  4. 🎯 Capability-Based Security: Only uses tools it's authorized for
  5. 💭 Memory Management: Maintains conversation context professionally

It's ready for production deployment! 🚀
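The sliding-window check inside `_check_rate_limit` is worth understanding in isolation. Here is the same logic extracted into a standalone class; the one change is that the clock is passed in explicitly (an assumption made here purely so the behavior can be tested without sleeping):

```python
from datetime import datetime, timedelta

class SlidingWindowLimiter:
    """Allow at most max_requests per rolling window - same idea as _check_rate_limit."""

    def __init__(self, max_requests: int, window: timedelta):
        self.max_requests = max_requests
        self.window = window
        self.timestamps = []

    def allow(self, now: datetime) -> bool:
        # Drop timestamps that have aged out of the window
        self.timestamps = [t for t in self.timestamps if now - t < self.window]
        if len(self.timestamps) < self.max_requests:
            self.timestamps.append(now)
            return True
        return False

limiter = SlidingWindowLimiter(max_requests=2, window=timedelta(minutes=1))
t0 = datetime(2024, 1, 1, 12, 0, 0)
print(limiter.allow(t0))                         # True
print(limiter.allow(t0 + timedelta(seconds=1)))  # True
print(limiter.allow(t0 + timedelta(seconds=2)))  # False - window is full
print(limiter.allow(t0 + timedelta(minutes=2)))  # True - old requests aged out
```

Because the list is pruned on every call, memory stays bounded by `max_requests` per agent, which is why the agent class can afford one of these per instance.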

🚀 Putting It All Together: LLM Zoomcamp Enterprise Demo (Professional Integration)

Let's see our enterprise system in action! 🎭

async def run_enterprise_demo():
    """
    🎬 Demonstrate the full LLM Zoomcamp enterprise system!

    This shows how to use our factory to create specialized agents
    and manage them in a production environment.
    """

    print("🎬 LLM Zoomcamp Enterprise System Demo")
    print("="*60)

    # 🏭 Create the agent factory
    # Note: In real implementation, you'd pass actual client, tools, index, documents
    print("🏭 Initializing Agent Factory...")

    # For demo purposes, we'll simulate the components
    factory = LLMZoomcampAgentFactory(
        client=client,  # Assume this exists from previous sections
        tools=llm_zoomcamp_tools,  # From previous sections
        index=index,    # From previous sections
        documents=documents  # From previous sections
    )

    print("✅ Factory ready!")

    # 🎭 Create different types of agents
    print("\n🎭 Creating specialized agents...")

    general_agent = factory.create_agent(AgentType.GENERAL_ASSISTANT, "general_001")
    tech_agent = factory.create_agent(AgentType.TECHNICAL_SUPPORT, "tech_001")
    advisor_agent = factory.create_agent(AgentType.COURSE_ADVISOR, "advisor_001")

    print("✅ Created 3 specialized agents!")

    # 📋 Show active agents
    print("\n📋 Active Agents:")
    active_agents = factory.list_active_agents()
    for agent_info in active_agents:
        print(f"  🎭 {agent_info['agent_id']} ({agent_info['agent_type']})")

    # 💬 Simulate conversations with different agents
    print("\n💬 Testing Agent Interactions...")

    # Test 1: General question to general assistant
    print("\n🤖 General Assistant handling course inquiry:")
    response1 = await general_agent.process_message_async(
        "Hi! I'm new to LLM Zoomcamp. What should I know about getting started?"
    )
    print(f"πŸ“ Response: {response1[:200]}...")

    # Test 2: Technical issue to technical support
    print("\n🔧 Technical Support handling Docker issue:")
    response2 = await tech_agent.process_message_async(
        "I'm having trouble with Docker not starting on my Windows machine. Can you help?"
    )
    print(f"πŸ“ Response: {response2[:200]}...")

    # Test 3: Course guidance to course advisor
    print("\n🎓 Course Advisor providing learning guidance:")
    response3 = await advisor_agent.process_message_async(
        "I have 3 months to complete the course. How should I structure my learning?"
    )
    print(f"πŸ“ Response: {response3[:200]}...")

    # 📊 Show performance metrics
    print("\n📊 Performance Metrics:")

    for agent_id in ["general_001", "tech_001", "advisor_001"]:
        metrics = factory.get_agent_metrics(agent_id)
        print(f"\n🎭 Agent: {agent_id}")
        print(f"  📈 Interactions: {metrics['interactions']}")
        print(f"  ⚡ Avg Response Time: {metrics['avg_response_time']:.2f}s")
        print(f"  ✅ Success Rate: {metrics['success_rate']:.2%}")

    print("\n🎉 Enterprise demo complete!")
    print("🚀 System ready for production deployment!")

# 🧪 Run the demo (in a real environment)
print("🧪 Enterprise Demo Code Ready!")
print("🚀 Run 'asyncio.run(run_enterprise_demo())' to see it in action!")

# 💡 Production deployment example
deployment_example = '''
# 🏭 Production Deployment Example

from llm_zoomcamp_enterprise import LLMZoomcampAgentFactory, AgentType

# 🔧 Initialize with production configuration
factory = LLMZoomcampAgentFactory(
    client=production_openai_client,
    tools=production_tools,
    index=production_search_index,
    documents=production_faq_database
)

# 🎭 Create agents for different services
customer_support = factory.create_agent(
    AgentType.TECHNICAL_SUPPORT, 
    "support_bot_prod"
)

course_guide = factory.create_agent(
    AgentType.COURSE_ADVISOR,
    "course_guide_prod"
)

# 🌐 Integrate with web API
# (async view functions need Flask 2.0+ installed with the async extra:
#  pip install "flask[async]")
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/chat/<agent_type>', methods=['POST'])
async def chat_endpoint(agent_type):
    message = request.json.get('message')

    # 🎯 Route to appropriate agent
    if agent_type == 'support':
        response = await customer_support.process_message_async(message)
    elif agent_type == 'advisor':
        response = await course_guide.process_message_async(message)
    else:
        response = "Unknown agent type"

    return jsonify({'response': response})

# 🚀 Start production server
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)
'''

print("💼 Production Deployment Example:")
print(deployment_example)

🎓 LLM Zoomcamp Explanation: Our enterprise system is like running a professional AI consulting firm! 🏢 We have:

  1. 🏭 Factory Management: Creates the right specialist for each job
  2. 🎭 Specialized Agents: Each agent is an expert in their domain
  3. 📊 Performance Monitoring: Track how well each agent performs
  4. 🔄 Async Processing: Handle multiple users simultaneously
  5. 🌐 API Integration: Ready to plug into web applications

Perfect for real-world deployment! 🚀
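Underneath the Flask example, the routing is just a dispatch table mapping the path parameter to a handler. A framework-free sketch makes that explicit (the handler names here are placeholders, not real endpoints):

```python
from typing import Callable, Dict

def support_handler(message: str) -> str:
    # Placeholder standing in for customer_support.process_message_async
    return f"[support] handling: {message}"

def advisor_handler(message: str) -> str:
    # Placeholder standing in for course_guide.process_message_async
    return f"[advisor] handling: {message}"

ROUTES: Dict[str, Callable[[str], str]] = {
    "support": support_handler,
    "advisor": advisor_handler,
}

def dispatch(agent_type: str, message: str) -> str:
    handler = ROUTES.get(agent_type)
    if handler is None:
        return "Unknown agent type"  # same fallback as the Flask example
    return handler(message)

print(dispatch("support", "Docker won't start"))
print(dispatch("poet", "write me a haiku"))
```

Keeping the table in one dict means adding a new agent type is a one-line change, instead of growing an if/elif chain inside the endpoint.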

🔧 Production Frameworks & Tools (LLM Zoomcamp Industry Standards)

Let's explore the professional frameworks used in industry for building agentic systems! 🏭

🎯 Toy AI Kit Integration (LLM Zoomcamp Framework Exploration)

"""
🧸 Toy AI Kit Integration for LLM Zoomcamp

Toy AI Kit is a framework for building conversational AI agents.
Let's see how to integrate our LLM Zoomcamp patterns with it!
"""

# 📦 Example integration with toy_ai_kit patterns
class LLMZoomcampToyAIAgent:
    """
    🧸 LLM Zoomcamp agent compatible with Toy AI Kit patterns.

    This shows how to structure agents for framework compatibility!
    """

    def __init__(self, name: str, persona: str, tools: List[str]):
        """
        🎭 Initialize agent with toy_ai_kit style configuration.

        Args:
            name: Agent name
            persona: Agent personality description  
            tools: List of available tool names
        """

        self.name = name
        self.persona = persona
        self.available_tools = tools
        self.conversation_state = {
            "context": [],
            "user_preferences": {},
            "current_task": None
        }

        print(f"🧸 Created {name} with persona: {persona}")
        print(f"🔧 Available tools: {', '.join(tools)}")

    def process(self, message: str, context: Dict = None) -> Dict[str, Any]:
        """
        🔄 Process message with toy_ai_kit style interface.

        Args:
            message: User's message
            context: Additional context

        Returns:
            Response dictionary with text and metadata
        """

        # 🎭 Use persona in response generation
        persona_prompt = f"""
You are {self.name}, {self.persona}

Available tools: {', '.join(self.available_tools)}
Current context: {context or 'None'}

User message: {message}

Respond in character and use tools if needed.
""".strip()

        # 🤖 Generate response (simplified for demo)
        response_text = f"As {self.name}, I'd say: Thanks for your message about '{message}'. Let me help you with that!"

        return {
            "text": response_text,
            "agent_name": self.name,
            "tools_used": [],  # Would track actual tool usage
            "confidence": 0.8,
            "context_updated": True
        }

# 🧸 Create LLM Zoomcamp agents with toy_ai_kit style
print("🧸 Creating Toy AI Kit Style Agents...")

zoomcamp_helper = LLMZoomcampToyAIAgent(
    name="ZoomcampHelper",
    persona="a friendly and knowledgeable course assistant who loves helping students succeed in their AI learning journey",
    tools=["search_faq", "add_faq_entry", "troubleshoot"]
)

docker_expert = LLMZoomcampToyAIAgent(
    name="DockerExpert",
    persona="a technical specialist focused on containerization and development environment setup",
    tools=["search_faq", "troubleshoot", "system_check"]
)

print("βœ… Toy AI Kit style agents created!")

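Before moving on, it helps to see the conversation-state idea in isolation. Here is a minimal, self-contained sketch of the same pattern: each agent carries a `conversation_state` dict and every call to `process` accumulates context for later turns. The `MiniAgent` class and its field names are hypothetical stand-ins, not part of the toy_ai_kit API:

```python
# 🧸 Minimal sketch of the conversation-state pattern (hypothetical names)
class MiniAgent:
    """Tracks per-agent conversation context across turns."""

    def __init__(self, name: str, persona: str):
        self.name = name
        self.persona = persona
        # State that persists between process() calls
        self.conversation_state = {"context": [], "current_task": None}

    def process(self, message: str) -> dict:
        # Record the message so later turns can reference earlier ones
        self.conversation_state["context"].append(message)
        return {
            "text": f"As {self.name}, I'd say: let me help with '{message}'.",
            "agent_name": self.name,
            "turns_so_far": len(self.conversation_state["context"]),
        }

agent = MiniAgent("DemoHelper", "a friendly course assistant")
first = agent.process("How do I install Docker?")
second = agent.process("And how do I run a container?")
```

Because the state lives on the instance, two agents created from the same class keep completely independent histories, which is exactly what makes the factory-style creation above safe.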
🎯 PydanticAI Pattern Integration (LLM Zoomcamp Advanced Frameworks)

"""
🎯 PydanticAI Integration for LLM Zoomcamp

PydanticAI is a modern framework for building AI agents with type safety.
Let's adapt our patterns for this powerful framework!
"""

from typing import Literal
from pydantic import BaseModel, Field

# πŸ“ Define structured data models
class LLMZoomcampQuery(BaseModel):
    """🎯 Structured query for LLM Zoomcamp assistance."""

    question: str = Field(description="The student's question")
    category: Literal["technical", "course", "general"] = Field(description="Question category")
    urgency: Literal["low", "medium", "high"] = Field(default="medium", description="How urgent is this?")
    student_level: Literal["beginner", "intermediate", "advanced"] = Field(default="beginner")

class LLMZoomcampResponse(BaseModel):
    """πŸ“ Structured response from LLM Zoomcamp agent."""

    answer: str = Field(description="The complete answer to the question")
    source: Literal["faq", "knowledge", "search"] = Field(description="Where the answer came from")
    confidence: float = Field(ge=0.0, le=1.0, description="Confidence in the answer")
    follow_up_suggestions: list[str] = Field(default_factory=list, description="Suggested follow-up questions")
    tools_used: list[str] = Field(default_factory=list, description="Tools that were used")

class LLMZoomcampAction(BaseModel):
    """🎬 Structured action for agent to take."""

    action_type: Literal["search", "respond", "clarify", "escalate"] = Field(description="What action to take")
    parameters: dict = Field(default_factory=dict, description="Action parameters")
    reasoning: str = Field(description="Why this action was chosen")

# 🎯 PydanticAI style agent
class LLMZoomcampPydanticAgent:
    """
    🎯 LLM Zoomcamp agent built with PydanticAI patterns.

    This agent uses structured data models for type safety and validation!
    """

    def __init__(self, agent_name: str):
        """
        🎭 Initialize PydanticAI style agent.

        Args:
            agent_name: Name of the agent
        """

        self.agent_name = agent_name
        self.tools_registry = {
            "search_faq": self._search_faq_tool,
            "add_faq": self._add_faq_tool,
            "troubleshoot": self._troubleshoot_tool
        }

        print(f"🎯 Created PydanticAI agent: {agent_name}")

    def plan_action(self, query: LLMZoomcampQuery) -> LLMZoomcampAction:
        """
        🧠 Plan what action to take based on the query.

        Args:
            query: Structured query from student

        Returns:
            Planned action with reasoning
        """

        # 🎯 Simple planning logic (in real system, this would use LLM)
        if query.category == "technical" and query.urgency == "high":
            return LLMZoomcampAction(
                action_type="search",
                parameters={"query": query.question, "focus": "troubleshooting"},
                reasoning="High urgency technical question requires immediate FAQ search"
            )
        elif query.category == "course":
            return LLMZoomcampAction(
                action_type="search", 
                parameters={"query": query.question, "focus": "course_info"},
                reasoning="Course-related questions should search course materials"
            )
        else:
            return LLMZoomcampAction(
                action_type="respond",
                parameters={"use_general_knowledge": True},
                reasoning="General question can be answered with built-in knowledge"
            )

    def execute_action(self, action: LLMZoomcampAction, query: LLMZoomcampQuery) -> LLMZoomcampResponse:
        """
        πŸš€ Execute the planned action and generate response.

        Args:
            action: Action to execute
            query: Original query

        Returns:
            Structured response
        """

        tools_used = []

        if action.action_type == "search":
            # πŸ” Execute search tool
            search_results = self.tools_registry["search_faq"](action.parameters)
            tools_used.append("search_faq")

            if search_results:
                answer = f"Based on our FAQ database: {search_results}"
                source = "faq"
                confidence = 0.9
            else:
                answer = "I couldn't find specific information in our FAQ database."
                source = "search"
                confidence = 0.3

        elif action.action_type == "respond":
            # 🧠 Use general knowledge
            answer = f"Based on general knowledge: {query.question} - This is a general response."
            source = "knowledge"
            confidence = 0.7

        else:
            answer = "I need to clarify your question before I can help."
            source = "knowledge"
            confidence = 0.5

        # πŸ’‘ Generate follow-up suggestions
        follow_ups = [
            "Would you like more details on any specific part?",
            "Is there anything else I can help you with?",
            "Do you need help with the next steps?"
        ]

        return LLMZoomcampResponse(
            answer=answer,
            source=source,
            confidence=confidence,
            follow_up_suggestions=follow_ups[:2],  # Limit to 2
            tools_used=tools_used
        )

    def process_query(self, query_dict: dict) -> dict:
        """
        πŸ”„ Main processing method with full validation.

        Args:
            query_dict: Raw query dictionary

        Returns:
            Response dictionary
        """

        try:
            # βœ… Validate input
            query = LLMZoomcampQuery(**query_dict)

            # 🧠 Plan action
            action = self.plan_action(query)

            # πŸš€ Execute action
            response = self.execute_action(action, query)

            return response.model_dump()

        except Exception as e:
            return {
                "answer": f"Sorry, I encountered an error: {str(e)}",
                "source": "error",
                "confidence": 0.0,
                "follow_up_suggestions": ["Please try rephrasing your question"],
                "tools_used": []
            }

    def _search_faq_tool(self, params: dict) -> str:
        """πŸ” Mock search FAQ tool."""
        query = params.get("query", "")
        return f"Found relevant information about: {query}"

    def _add_faq_tool(self, params: dict) -> str:
        """πŸ“ Mock add FAQ tool."""
        return "FAQ entry added successfully"

    def _troubleshoot_tool(self, params: dict) -> str:
        """πŸ”§ Mock troubleshooting tool."""
        return "Troubleshooting steps provided"

# πŸ§ͺ Test PydanticAI style agent
print("\nπŸ§ͺ Testing PydanticAI Style Agent...")

pydantic_agent = LLMZoomcampPydanticAgent("ZoomcampPydanticBot")

# Test with structured query
test_query = {
    "question": "How do I fix Docker installation issues on Windows?",
    "category": "technical",
    "urgency": "high",
    "student_level": "beginner"
}

response = pydantic_agent.process_query(test_query)

print(f"πŸ“ Question: {test_query['question']}")
print(f"πŸ€– Response: {response['answer']}")
print(f"🏷️ Source: {response['source']}")
print(f"⭐ Confidence: {response['confidence']}")
print(f"πŸ”§ Tools used: {response['tools_used']}")
print(f"πŸ’‘ Follow-ups: {response['follow_up_suggestions']}")

print("βœ… PydanticAI integration demo complete!")

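The `tools_registry` dict in the agent above is a small but important design choice: instead of an `if/elif` chain, actions carry a tool name and the agent looks the handler up in a dictionary. Here is that dispatch pattern reduced to a self-contained sketch (the tool functions are hypothetical stand-ins for the real mocks):

```python
# 🔧 Tool-registry dispatch sketch (hypothetical tool functions)
def search_faq(params: dict) -> str:
    return f"Found relevant information about: {params.get('query', '')}"

def troubleshoot(params: dict) -> str:
    return "Troubleshooting steps provided"

# Adding a new tool is one dict entry -- no branching logic to touch
TOOLS = {"search_faq": search_faq, "troubleshoot": troubleshoot}

def dispatch(tool_name: str, params: dict) -> str:
    handler = TOOLS.get(tool_name)
    if handler is None:
        raise KeyError(f"Unknown tool: {tool_name}")
    return handler(params)

result = dispatch("search_faq", {"query": "docker install"})
```

This is why registering a new capability in the agent only requires one new entry in `tools_registry`, which scales much better than editing a branch for every tool.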
πŸŽ“ LLM Zoomcamp Explanation: Professional frameworks give us:

  1. 🎯 Type Safety: Structured data models prevent errors
  2. βœ… Validation: Automatic checking of inputs and outputs
  3. πŸ“‹ Documentation: Self-describing interfaces
  4. πŸ”§ Tool Integration: Standardized ways to add capabilities
  5. πŸš€ Scalability: Patterns that work for large systems

These are the tools that professional AI engineers use! πŸ’Ό
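To make the type-safety and validation points concrete: the `Literal[...]` and `Field(ge=0.0, le=1.0)` constraints above mean Pydantic rejects bad inputs before your agent logic ever runs. The sketch below shows the equivalent checks hand-rolled with only the standard library, so you can see exactly what the framework automates (the `validate_query` helper is hypothetical, for illustration only):

```python
# ✅ What Pydantic's Literal + ge/le constraints do, made explicit
from typing import Literal, get_args

Category = Literal["technical", "course", "general"]

def validate_query(question: str, category: str, confidence: float) -> dict:
    # Literal check: category must be one of the allowed values
    if category not in get_args(Category):
        raise ValueError(f"category must be one of {get_args(Category)}")
    # Range check: mirrors Field(ge=0.0, le=1.0)
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must be between 0.0 and 1.0")
    return {"question": question, "category": category, "confidence": confidence}

ok = validate_query("How do I fix Docker?", "technical", 0.9)

try:
    validate_query("Bad input", "cooking", 0.9)  # invalid category
    rejected = False
except ValueError:
    rejected = True
```

With Pydantic you get all of this for free from the model declaration, plus detailed error messages, which is why the `process_query` method can wrap everything in a single `try/except`.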

πŸŽ“ Enterprise Deployment Patterns (LLM Zoomcamp Production Readiness)

You've mastered the complete spectrum of agentic AI development! πŸ† Let's review your journey:

πŸš€ From Beginner to Professional (LLM Zoomcamp Learning Path)

🏁 Part 1 - Foundation: You learned basic RAG and decision-making

  • βœ… Search β†’ Context β†’ Generate pipeline
  • βœ… Agentic decision logic
  • βœ… JSON-based function calling

⚑ Part 2 - Advanced: You mastered professional function calling and iteration

  • βœ… OpenAI Function Calling API
  • βœ… Iterative search strategies
  • βœ… Conversational agents with memory
  • βœ… Beautiful UI components

🏭 Part 3 - Enterprise: You built production-ready systems

  • βœ… Factory patterns for agent creation
  • βœ… Performance monitoring and rate limiting
  • βœ… Async processing and error handling
  • βœ… Framework integration patterns

🌟 Professional Skills Achieved (LLM Zoomcamp Career Ready)

You now have the skills to build:

  1. 🏒 Enterprise Chatbots: Multi-agent systems for customer support
  2. πŸŽ“ Educational Platforms: Personalized tutoring with progress tracking
  3. πŸ’Ό Business Intelligence: Agents that query databases and generate insights
  4. πŸ” Research Systems: Multi-source information gathering and synthesis
  5. πŸ› οΈ Technical Support: Automated troubleshooting with human escalation

πŸš€ Next Steps in Your LLM Zoomcamp Journey

  1. πŸ“š Keep Learning: Explore advanced topics like multi-modal agents
  2. πŸ› οΈ Build Projects: Create your own agentic systems for real problems
  3. 🀝 Join Community: Connect with other LLM Zoomcamp learners
  4. πŸ’Ό Apply Skills: Use these patterns in your work or personal projects
  5. πŸŽ“ Teach Others: Share your knowledge with fellow learners

πŸ† LLM Zoomcamp Achievement Unlocked

πŸŽ–οΈ Agentic AI Engineer: You've mastered the complete stack from basic RAG to enterprise deployment!

Skills Certified:

  • βœ… RAG Pipeline Design
  • βœ… Function Calling Architecture
  • βœ… Conversational AI Development
  • βœ… Enterprise System Design
  • βœ… Production Deployment Patterns

Ready For:

  • 🏒 Senior AI Engineer roles
  • πŸš€ Building production AI products
  • πŸŽ“ Leading AI training initiatives
  • πŸ’‘ Innovating new agent architectures

πŸŽ‰ Congratulations, LLM Zoomcamp Graduate!

You've completed an incredible journey from basic concepts to enterprise-grade agentic systems! πŸŽ“βœ¨

The patterns and techniques you've learned are used by top AI companies worldwide. You're now equipped to build the next generation of intelligent assistants! πŸš€

Keep building, keep learning, and keep pushing the boundaries of what's possible with AI! 🌟


πŸ“š Professional Resources


πŸŽ“ LLM Zoomcamp Tutorial Series - Part 3 Complete! πŸŽ‰

You are now a certified LLM Zoomcamp Agentic AI Engineer! Go build amazing things! πŸš€β­

#LLMZoomcamp
