DEV Community

Cover image for 10 Essential Practices for Building Robust LLM Applications
Vaibhav Singh
Vaibhav Singh

Posted on

10 Essential Practices for Building Robust LLM Applications

Introduction

I've been building LLM applications in the cloud. I've also seen a lot of developers making LLM apps which is super good for a MVP or prototype and needs some work to make it production ready. One or more of the listed practices when applied can help your application to scale in an effective manner. This article does not cover the entire software engineering aspect of the application development but rather only on the LLM wrapper applications. Also, the code snippets are in python and the same logic can be applied to other languages as well.

1. Leverage Middleware for Flexibility

Use middleware like LiteLLM or LangChain to avoid vendor lock-in and easily switch between models as they evolve.

Python:

from litellm import completion

response = completion(
    model="gpt-3.5-turbo", 
    messages=[{"role": "user", "content": "Hello, how are you?"}]
)
Enter fullscreen mode Exit fullscreen mode

Middleware solutions like LiteLLM and LangChain provide a layer of abstraction between your application and various LLM providers. This abstraction allows you to easily switch between different models or providers without changing your core application code. As the AI landscape rapidly evolves, new models are frequently released with improved capabilities. By using middleware, you can quickly adopt these new models or switch providers based on performance, cost, or feature requirements, ensuring your application stays up-to-date and competitive.

2. Implement Retry Mechanisms

Avoid rate limit issues by implementing retry logic in your API calls.

Python:

import time
from openai import OpenAI

client = OpenAI()

def retry_api_call(max_retries=3, delay=1):
    for attempt in range(max_retries):
        try:
            response = client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=[{"role": "user", "content": "Hello!"}]
            )
            return response
        except Exception as e:
            if attempt == max_retries - 1:
                raise e
            time.sleep(delay * (2 ** attempt))  # Exponential backoff
Enter fullscreen mode Exit fullscreen mode

LLM providers often impose rate limits to prevent abuse and ensure fair usage. Implementing a retry mechanism with exponential backoff helps your application gracefully handle temporary failures or rate limit errors. This approach increases the reliability of your application by automatically retrying failed requests, reducing the chances of service interruptions due to transient issues. The exponential backoff strategy (increasing the delay between retries) helps prevent overwhelming the API with immediate re-requests, which could exacerbate rate limiting issues.

3. Set Up LLM Provider Fallbacks

Don't rely on a single LLM provider. Implement fallbacks to handle quota issues or service disruptions.

from litellm import completion

def get_llm_response(prompt):
    providers = ['openai/gpt-3.5-turbo', 'anthropic/claude-2', 'cohere/command-nightly']
    for provider in providers:
        try:
            response = completion(model=provider, messages=[{"role": "user", "content": prompt}])
            return response
        except Exception as e:
            print(f"Error with {provider}: {str(e)}")
            continue
    raise Exception("All LLM providers failed")
Enter fullscreen mode Exit fullscreen mode

Relying on a single LLM provider can lead to service disruptions if that provider experiences downtime or if you hit quota limits. By implementing fallback options, you ensure continuous operation of your application. This approach also allows you to leverage the strengths of different providers or models for various tasks. LiteLLM simplifies this process by providing a unified interface for multiple providers, making it easier to switch between them or implement fallback logic.

4. Implement Observability

Use tools like Langfuse or Helicone for LLM tracing and debugging.

from langfuse.openai import OpenAI

client = OpenAI(
    api_key="your-openai-api-key",
    langfuse_public_key="your-langfuse-public-key",
    langfuse_secret_key="your-langfuse-secret-key"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, AI!"}]
)
Enter fullscreen mode Exit fullscreen mode

Advantages of implementing observability:

  • Enhanced debugging: Easily trace and replay conversations to identify issues.
  • Performance optimization: Gain insights into response times and model performance.
  • Cost management: Track token usage and associated costs for better budget control.
  • Quality assurance: Monitor the quality of responses and identify areas for improvement.
  • User experience analysis: Understand user interactions and optimize prompts accordingly.
  • Compliance and auditing: Maintain logs for regulatory compliance and internal audits.
  • Anomaly detection: Quickly identify and respond to unusual patterns or behaviors.

Observability tools provide crucial insights into your LLM application's performance, usage patterns, and potential issues. They allow you to monitor and analyze interactions with LLMs in real-time, helping you optimize prompts, identify bottlenecks, and ensure the quality of AI-generated responses. This level of visibility is essential for maintaining, debugging, and improving your application over time.

5. Manage Prompts Effectively

Use prompt management tools with versioning instead of hardcoding prompts in your code or text files.

from promptflow import PromptFlow

pf = PromptFlow()

prompt_template = pf.get_prompt("greeting_prompt", version="1.2")
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt_template.format(name="Alice")}]
)
Enter fullscreen mode Exit fullscreen mode

Effective prompt management is crucial for maintaining and improving your LLM application. By using dedicated prompt management tools, you can version control your prompts, A/B test different variations, and easily update them across your application. This approach separates prompt logic from application code, making it easier to iterate on prompts without changing the core application. It also enables non-technical team members to contribute to prompt improvements and allows for better collaboration in refining AI interactions.

6. Store Conversation History Persistently

Use a persistent cache like Redis for storing conversation history instead of in-memory cache which is not adapted for distributed systems.

from langchain.memory import RedisChatMessageHistory
from langchain.chains import ConversationChain
from langchain.llms import OpenAI

# Initialize Redis chat message history
message_history = RedisChatMessageHistory(url="redis://localhost:6379/0", ttl=600, session_id="user-123")

# Create a conversation chain with Redis memory
conversation = ConversationChain(
    llm=OpenAI(),
    memory=message_history,
    verbose=True
)

# Use the conversation
response = conversation.predict(input="Hi there!")
print(response)

# The conversation history is automatically stored in Redis
Enter fullscreen mode Exit fullscreen mode

Storing conversation history is essential for maintaining context in ongoing interactions and providing personalized experiences. Using a persistent cache like Redis, especially in distributed systems, ensures that conversation history is reliably stored and quickly accessible. This approach allows your application to scale horizontally while maintaining consistent user experiences across different instances or servers. The use of Redis with LangChain simplifies the integration of persistent memory into your conversational AI system, making it easier to build stateful, context-aware applications.

7. Use JSON Mode whenever possible

Whenever possible like extracting structured information, provide a JSON schema instead of relying on raw text output.

import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-1106",
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "Extract the name and age from the user's input."},
        {"role": "user", "content": "My name is John and I'm 30 years old."}
    ]
)

print(response.choices[0].message.content)
# Output: {"name": "John", "age": 30}
Enter fullscreen mode Exit fullscreen mode

Using JSON mode for information extraction provides a structured and consistent output format, making it easier to parse and process the LLM's responses in your application. This approach reduces the need for complex post-processing of free-form text and minimizes the risk of misinterpretation. It's particularly useful for tasks like form filling, data extraction from unstructured text, or any scenario where you need to integrate AI-generated content into existing data structures or databases.

8. Set Up Credit Alerts

Implement alerts for prepaid credits and per-user credit checks, even in MVP stages.

def check_user_credits(user_id, requested_tokens):
    user_credits = get_user_credits(user_id)
    if user_credits < requested_tokens:
        raise InsufficientCreditsError(f"User {user_id} has insufficient credits")

    remaining_credits = user_credits - requested_tokens
    if remaining_credits < CREDIT_ALERT_THRESHOLD:
        send_low_credit_alert(user_id, remaining_credits)

    return True
Enter fullscreen mode Exit fullscreen mode

Implementing credit alerts and per-user credit checks is crucial for managing costs and ensuring fair usage in your LLM application. This system helps prevent unexpected expenses and allows you to proactively manage user access based on their credit limits. By setting up alerts at multiple thresholds, you can inform users or administrators before credits are depleted, ensuring uninterrupted service. This approach is valuable even in MVP stages, as it helps you understand usage patterns and plan for scaling your application effectively.

9. Implement Feedback Loops

Create mechanisms for users to provide feedback on AI responses, starting with simple thumbs up/down ratings.

def process_user_feedback(response_id, feedback):
    if feedback == 'thumbs_up':
        log_positive_feedback(response_id)
    elif feedback == 'thumbs_down':
        log_negative_feedback(response_id)
        trigger_improvement_workflow(response_id)

# In your API endpoint
@app.route('/feedback', methods=['POST'])
def submit_feedback():
    data = request.json
    process_user_feedback(data['response_id'], data['feedback'])
    return jsonify({"status": "Feedback received"})
Enter fullscreen mode Exit fullscreen mode

Implementing feedback loops is essential for continuously improving your LLM application. By allowing users to provide feedback on AI responses, you can identify areas where the model performs well and where it needs improvement. This data can be used to fine-tune models, adjust prompts, or implement additional safeguards. Starting with simple thumbs up/down ratings provides an easy way for users to give feedback, while more detailed feedback options can be added later for deeper insights. This approach helps in building trust with users and demonstrates your commitment to improving the AI's performance based on real-world usage.

10. Implement Guardrails

Use prompt guards to check for prompt injection attacks, toxic content, and off-topic responses.

import re
from better_profanity import profanity

def check_prompt_injection(input_text):
    injection_patterns = [
        r"ignore previous instructions",
        r"disregard all prior commands",
        r"override system prompt"
    ]
    for pattern in injection_patterns:
        if re.search(pattern, input_text, re.IGNORECASE):
            return True
    return False

def check_toxic_content(input_text):
    return profanity.contains_profanity(input_text)

def sanitize_input(input_text):
    if check_prompt_injection(input_text):
        raise ValueError("Potential prompt injection detected")

    if check_toxic_content(input_text):
        raise ValueError("Toxic content detected")

    # Additional checks can be added here (e.g., off-topic detection)

    return input_text  # Return sanitized input if all checks pass

# Usage
try:
    safe_input = sanitize_input(user_input)
    # Process safe_input with your LLM
except ValueError as e:
    print(f"Input rejected: {str(e)}")
Enter fullscreen mode Exit fullscreen mode

Implementing guardrails is crucial for ensuring the safety and reliability of your LLM application. This example demonstrates how to check for potential prompt injection attacks and toxic content. Prompt injection attacks attempt to override or bypass the system's intended behavior, while toxic content checks help maintain a safe and respectful environment. By implementing these checks, you can prevent malicious use of your AI system and ensure that the content generated aligns with your application's guidelines and ethical standards. Additional checks can be added to detect off-topic responses or other unwanted behaviors, further enhancing the robustness of your application.

Conclusion

All the above listed points can be easily integrated into your application and they prepare you better for scaling in production. You may also agree or disagree on some of the above points. In any case, feel free to post your questions or comments.

Top comments (5)

Collapse
 
axorax profile image
Axorax

thanks for sharing!

Collapse
 
bluedream0801 profile image
Declan

How are you
currently, we are going to build new coin marketplace similar pump.fun or base.fun.
If you have good experience with MERN Stack, I want to collaborate with you

Collapse
 
thibaultmardinli profile image
Thetoolist

This is amazing ;)

Collapse
 
piyushtechsavy profile image
piyush tiwari

Thanks for sharing. Any suggestion on preventing hallucinations in RAG

Collapse
 
vaibhav3002 profile image
Vaibhav Singh

Hi Piyush,

There are basic strategies to start with -

  1. By stating in the prompt to use only and only the context to answer.
  2. Lowering the temperature

But the real issue comes from the chunks (retrieved segments). A way to solve this is to add section headers to each chunk.

Have a look at this article d-star.ai/solving-the-out-of-conte...

There is code that goes along with it.