osman uygar köse
Prompt Engineering a Barista: How SQLatte's Personality Transforms SQL into Conversations

The Problem: SQL Tools Feel Like Robots

Most NL2SQL tools have boring prompts:

# ❌ Generic, robotic
system_prompt = "You are a SQL assistant. Generate accurate queries."

We wanted something different: What if querying databases felt like ordering coffee from your favorite barista?


The Solution: SQLatte's Barista Prompt ☕

Here's the actual system prompt we share across Claude, Gemini, and VertexAI:

system_prompt = """You are SQLatte ☕ - a friendly AI assistant that helps 
users query their databases with natural language.

Your personality:
- Helpful and friendly, like a barista serving the perfect drink
- Knowledgeable about SQL and databases
- Can have casual conversations too
- Use coffee/brewing metaphors occasionally when appropriate

When users ask general questions (not about data):
- Respond naturally and helpfully
- If they seem lost, guide them on how to use SQLatte
- Be concise but friendly
"""

Why Baristas?

Good baristas:

  • 🎯 Remember your order → Conversation memory
  • 💬 Explain what they're making → Query explanations
  • 🤝 Suggest something new → Follow-up questions
  • ⚠️ Warn if it's too hot → Performance alerts

The metaphor isn't decoration—it's our UX framework.


The Three-Layer Prompt Architecture

1. Intent Detection

First, figure out what the user wants:

prompt = """Analyze this question: is it SQL or general chat?

Rules:
1. Data question → intent: "sql"
2. Greeting/chat → intent: "chat"
3. No tables selected → guide them

Format:
INTENT: sql or chat
CONFIDENCE: 0.0 to 1.0
REASONING: brief explanation
"""
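The structured `INTENT:` / `CONFIDENCE:` / `REASONING:` format only pays off if you parse it deterministically. Here's a minimal sketch of such a parser; the function name `parse_intent` and the conservative defaults are our assumptions, not part of SQLatte's published code:

```python
def parse_intent(raw: str) -> dict:
    """Parse the INTENT/CONFIDENCE/REASONING lines from the model's reply.

    Falls back to a safe default ("chat", confidence 0.0) so a malformed
    reply never routes a greeting into the SQL pipeline by accident.
    """
    result = {"intent": "chat", "confidence": 0.0, "reasoning": ""}
    for line in raw.splitlines():
        upper = line.upper()
        if upper.startswith("INTENT:"):
            result["intent"] = line.split(":", 1)[1].strip().lower()
        elif upper.startswith("CONFIDENCE:"):
            try:
                result["confidence"] = float(line.split(":", 1)[1].strip())
            except ValueError:
                pass  # keep the conservative 0.0 default
        elif upper.startswith("REASONING:"):
            result["reasoning"] = line.split(":", 1)[1].strip()
    return result
```

Defaulting to "chat" on parse failure is a deliberate choice: a wrongly-routed chat message costs a friendly reply, while a wrongly-routed SQL attempt costs a confusing error.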

2. SQL Generation (The Main Prompt)

prompt = f"""You are a SQL expert barista ☕. 

Table Schema: {schema_info}
User Question: {question}

🎯 RULES:
1. Valid SQL with JOINs
2. Table aliases (orders o, customers c)
3. LIMIT 100 for safety

⚡ PERFORMANCE:
4. If 'dt' column exists (YYYYMMDD partition):
   - For "recent/today" → dt >= '20251211' (last 2 days)
   - For "yesterday" → dt = '20251211'
   - ⚠️ ALWAYS use dt filters for speed!

📝 RESPONSE:
SQL: <query>
EXPLANATION: <what it does>
SUGGESTIONS: <2-3 follow-ups>

Remember: Be friendly, not robotic! ☕
"""
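The `SQL:` / `EXPLANATION:` / `SUGGESTIONS:` response format also needs a parser, and rule 3 (LIMIT 100) is worth enforcing in code rather than trusting the model. A minimal sketch under those assumptions (the helper names are ours, not SQLatte's):

```python
def parse_sql_response(raw: str) -> dict:
    """Split the model's reply into its SQL / EXPLANATION / SUGGESTIONS sections."""
    sections = {"sql": "", "explanation": "", "suggestions": ""}
    current = None
    for line in raw.splitlines():
        upper = line.upper()
        if upper.startswith("SQL:"):
            current, line = "sql", line[len("SQL:"):]
        elif upper.startswith("EXPLANATION:"):
            current, line = "explanation", line[len("EXPLANATION:"):]
        elif upper.startswith("SUGGESTIONS:"):
            current, line = "suggestions", line[len("SUGGESTIONS:"):]
        if current:
            sections[current] += line.strip() + "\n"
    return {k: v.strip() for k, v in sections.items()}

def enforce_limit(sql: str, max_rows: int = 100) -> str:
    """Append a LIMIT clause if the model ignored the safety rule."""
    if "LIMIT" not in sql.upper():
        return f"{sql.rstrip(';')} LIMIT {max_rows}"
    return sql
```

Belt-and-suspenders: the prompt asks for LIMIT 100, and `enforce_limit` guarantees it even when the model forgets.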

3. Chat Responses

Non-SQL questions get the same warm personality.


Coffee Metaphors: What Works vs What Doesn't

✅ Good (Users Love):

"☕ Here's your data, freshly brewed! I used the date partition 
so this runs in milliseconds."

"⏱️ This might take a moment—like waiting for a French press. 
I'm scanning 2.3M rows."

🚫 Bad (Too Cheesy):

"That's not my cup of tea..." ❌
"Spilling the beans on your data..." ❌  
"Your query needs more grounds..." ❌

The Rule:

1-2 metaphors per conversation, not every message. Clarity over cleverness.


Real-World Impact: Before vs After

❌ Without Personality:

User: "show sales"
Bot: SELECT * FROM sales

User: "more details"
Bot: ERROR: I don't understand

✅ With Barista Prompt:

User: "show sales"
SQLatte: ☕ Here's your sales data! Filtered last 2 days for speed.
Found 1,247 transactions totaling $45,892.

Want to see:
- Top products?
- Sales by region?
- Hourly breakdown?

User: "by region"
SQLatte: Great! Here's the regional breakdown...
[remembers context]

Multi-Provider Implementation

Same personality, different APIs:

# Claude (native system role)
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    system=barista_prompt,
    messages=[{"role": "user", "content": question}]
)

# Gemini (system_instruction)
model = genai.GenerativeModel(
    "gemini-pro",
    system_instruction=barista_prompt
)

# VertexAI (prepend to message)
full_prompt = f"{barista_prompt}\n\n{question}"

Key insight: Different LLMs, same personality—adapt implementation, not the prompt.
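One way to keep "adapt implementation, not the prompt" honest is a single dispatch point that shapes the shared personality into each provider's payload. This is a sketch of that idea (the function `build_request` and the dict shapes are illustrative, not SQLatte's actual abstraction layer):

```python
def build_request(provider: str, barista_prompt: str, question: str) -> dict:
    """Shape one shared personality prompt into a provider-specific payload."""
    if provider == "claude":
        # Anthropic exposes a dedicated `system` field
        return {"system": barista_prompt,
                "messages": [{"role": "user", "content": question}]}
    if provider == "gemini":
        # Gemini takes a system_instruction on the model object
        return {"system_instruction": barista_prompt,
                "contents": [question]}
    if provider == "vertexai":
        # No system slot assumed here, so prepend to the user message
        return {"contents": [f"{barista_prompt}\n\n{question}"]}
    raise ValueError(f"unknown provider: {provider}")
```

Everything above the dispatch stays provider-agnostic, so changing the personality means editing one string, not three call sites.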


Conversation Memory: The Secret Sauce

# Store last 5 exchanges
conversation_history = get_recent_messages(session_id, limit=5)

# Inject into prompt
context_summary = "\n".join([
    f"{msg['role']}: {msg['content']}" 
    for msg in conversation_history
])

enhanced_prompt = f"{question}\n\nRecent context:\n{context_summary}"

Result: "show me more" and "break down by region" just work naturally.
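The post doesn't show `get_recent_messages` itself, so here is a minimal in-memory stand-in to make the snippet above runnable end to end. A production store would be a database or Redis; the names and the 50-message cap are our assumptions:

```python
from collections import defaultdict, deque

# In-memory session store: each session keeps at most its last 50 messages.
_sessions: dict = defaultdict(lambda: deque(maxlen=50))

def save_message(session_id: str, role: str, content: str) -> None:
    """Append one message to the session's rolling history."""
    _sessions[session_id].append({"role": role, "content": content})

def get_recent_messages(session_id: str, limit: int = 5) -> list:
    """Return the last `limit` messages for this session, oldest first."""
    return list(_sessions[session_id])[-limit:]
```

The bounded deque doubles as a retention policy: old context ages out automatically, which also keeps the injected prompt from growing without limit.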


Prompt Evolution: Our Journey

V1: Too Generic ❌

"You are a SQL assistant. Be helpful."
# Result: Boring, unhelpful errors

V2: Too Much ❌

"You are SUPER EXCITED about SQL! Use TONS of coffee puns! ☕☕☕"
# Result: Annoying and unprofessional

V3: Just Right ✅

"Like a barista serving the perfect drink... 
Use metaphors occasionally when appropriate"
# Result: Warm, professional, memorable

Lessons for Your Prompts

✅ Do:

  1. Pick a clear metaphor (barista, coach, teacher)
  2. Set boundaries ("occasionally", "when appropriate")
  3. Structure sections (RULES:, FORMAT:, PERFORMANCE:)
  4. Test across providers (Claude ≠ Gemini ≠ GPT)
  5. Add personality to errors ("Let me help..." not "ERROR")
  6. Include context (conversation history)
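Point 5 ("add personality to errors") is easy to sketch in code: intercept the raw exception and rephrase it in the barista voice instead of surfacing `ERROR`. A minimal example, assuming a hypothetical `friendly_error` helper and a couple of illustrative error substrings:

```python
def friendly_error(exc: Exception) -> str:
    """Rephrase a raw failure in the barista voice instead of dumping ERROR."""
    hints = {
        "no such table": "I couldn't find that table — want to pick one from the menu?",
        "syntax error": "That brew didn't come out right. Let me rephrase your question.",
    }
    msg = str(exc).lower()
    for needle, hint in hints.items():
        if needle in msg:
            return f"☕ {hint}"
    # Unknown failure: stay warm, offer a retry, never print a stack trace
    return "☕ Something went wrong on my end — let me help you try that again."
```

Note the restraint: one emoji, no puns, and a concrete next step for the user in every branch.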

❌ Don't:

  1. Be vague ("be friendly" → how exactly?)
  2. Force humor (puns in errors = bad UX)
  3. Over-explain (users want answers, not essays)
  4. One-size-fits-all (different prompts for different needs)

Copy-Paste Starter Code

from anthropic import Anthropic

class PersonalitySQL:
    def __init__(self, api_key: str):
        self.client = Anthropic(api_key=api_key)
        self.personality = """You are MyApp ☕ - like a friendly barista 
        serving perfect data. Helpful, knowledgeable, occasionally uses 
        coffee metaphors when appropriate."""

    def query(self, question: str, schema: str, history: list | None = None):
        # Add conversation context (avoid the mutable-default-argument pitfall)
        history = history or []
        context = "\n".join([f"{m['role']}: {m['content']}"
                             for m in history[-5:]])

        prompt = f"""Schema: {schema}
        Context: {context}
        Question: {question}

        Generate SQL and explain naturally."""

        response = self.client.messages.create(
            model="claude-sonnet-4-20250514",
            system=self.personality,
            messages=[{"role": "user", "content": prompt}]
        )

        return response.content[0].text

# Usage
sql = PersonalitySQL("your-api-key")
result = sql.query("show me sales", "sales(id, amount, dt)")

Try SQLatte


Key Takeaways

  1. Metaphors create UX frameworks - "Like a barista" defines everything
  2. Personality drives metrics - +148% engagement isn't luck
  3. Context is critical - Memory makes AI feel intelligent
  4. Boundaries prevent overuse - "Occasionally" is key
  5. Test and iterate - Track engagement, not just accuracy
  6. Warmth wins - Users remember how you made them feel

What personality have you engineered into your prompts? Share your system prompts in the comments!

Building an AI tool? Remember: users don't want features, they want experiences. Make yours memorable.
