Leo Laish
AYW + OpenAI Integration: A Developer's Guide

A step-by-step guide to integrating OpenAI's GPT models into your AYW chatbot platform. Build intelligent, context-aware responses in 40 minutes.

Published: June 6, 2026 | 10 min read | Part 3 of AYW Dev.to Series

Why OpenAI + AYW?

AYW's human-guided AI approach means we don't just "pipe" user messages to OpenAI. Instead, we:

  • Route by intent (which specialist bot should respond?)
  • Inject human-guided system prompts (persona-specific instructions)
  • Manage conversation context (last 10 messages)
  • Handle errors gracefully (fallback responses, rate limits)
  • Track token usage (cost control)
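The routing step in the first bullet can be sketched as a simple keyword matcher — a toy stand-in for the real router, with illustrative keyword lists (the bot IDs mirror the ones used later in this guide):

```typescript
// Toy intent router: pick a specialist bot ID from keywords in the message.
// Keyword lists are illustrative, not AYW's actual routing rules.
const INTENT_KEYWORDS: Record<string, string[]> = {
  support: ['help', 'issue', 'error', 'reset', 'broken'],
  sales: ['price', 'pricing', 'plan', 'buy', 'upgrade'],
  knowledge: ['docs', 'documentation', 'what is', 'how does'],
};

function routeByIntent(message: string): string {
  const text = message.toLowerCase();
  for (const [botId, keywords] of Object.entries(INTENT_KEYWORDS)) {
    if (keywords.some(kw => text.includes(kw))) return botId;
  }
  return 'welcome'; // default bot when no intent matches
}
```

In production you would likely replace the keyword lists with an embedding-based or LLM-based classifier, but the shape of the decision is the same.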

Let's build it.


Prerequisites (5 Minutes)

1. Get Your OpenAI API Key

  1. Go to OpenAI Platform
  2. Create an account or sign in
  3. Navigate to API Keys → Create new secret key
  4. Copy the key (starts with sk-)

2. Install Dependencies

cd apps/backend
npm install openai

3. Add Environment Variables

In apps/backend/.env:

OPENAI_API_KEY=sk-your-api-key-here
OPENAI_MODEL=gpt-4o-mini  # or gpt-4o, gpt-3.5-turbo
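A missing or empty key otherwise surfaces later as a confusing 401 mid-request. One option is a small fail-fast check at startup (the `requireEnv` helper is ours, not part of AYW or the OpenAI SDK):

```typescript
// Fail fast on missing configuration instead of hitting a 401 at request time.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value || !value.trim()) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// e.g. near the top of the backend's entry point:
// const apiKey = requireEnv('OPENAI_API_KEY');
```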

Part 1: Creating the OpenAI Service

Create apps/backend/src/services/openaiService.ts:

import OpenAI from 'openai';
import { MessageDTO } from '@ayw/shared';

export class OpenAIService {
  private client: OpenAI;
  private model: string;

  constructor() {
    this.client = new OpenAI({
      apiKey: process.env.OPENAI_API_KEY
    });
    this.model = process.env.OPENAI_MODEL || 'gpt-4o-mini';
  }

  async generateResponse(options: {
    botId: string;
    botName: string;
    systemPrompt: string;
    conversationHistory: MessageDTO[];
    userMessage: string;
    temperature?: number;
  }): Promise<{
    response: string;
    tokensUsed: number;
    confidence: number;
  }> {
    try {
      // Build messages array from conversation history
      const messages: OpenAI.Chat.ChatCompletionMessageParam[] = [
        { role: 'system', content: options.systemPrompt },
        ...this.convertHistoryToMessages(options.conversationHistory),
        { role: 'user', content: options.userMessage }
      ];

      const completion = await this.client.chat.completions.create({
        model: this.model,
        messages,
        temperature: options.temperature ?? 0.7, // ?? so an explicit 0 isn't overridden
        max_tokens: 500,
        top_p: 1,
        frequency_penalty: 0,
        presence_penalty: 0
      });

      const response = completion.choices[0]?.message?.content || 'I apologize, I couldn\'t generate a response.';
      const tokensUsed = completion.usage?.total_tokens || 0;

      // Calculate a simple confidence score
      const confidence = completion.choices[0]?.finish_reason === 'stop' 
        ? 0.9 
        : 0.5;

      return {
        response,
        tokensUsed,
        confidence
      };
    } catch (error: any) {
      console.error('OpenAI API error:', error);

      // Handle rate limits
      if (error?.status === 429) {
        throw new Error('RATE_LIMITED: OpenAI API rate limit exceeded. Please try again later.');
      }

      // Handle authentication errors
      if (error?.status === 401) {
        throw new Error('AUTH_ERROR: Invalid OpenAI API key.');
      }

      throw new Error(`OpenAI API error: ${error.message}`);
    }
  }

  private convertHistoryToMessages(history: MessageDTO[]): OpenAI.Chat.ChatCompletionMessageParam[] {
    return history.map(msg => ({
      role: msg.sender === 'user' ? 'user' as const : 'assistant' as const,
      content: msg.content
    }));
  }

  async generateQuickReplies(options: {
    botId: string;
    context: string;
    count?: number;
  }): Promise<string[]> {
    try {
      const completion = await this.client.chat.completions.create({
        model: this.model,
        messages: [
          {
            role: 'system',
            content: `You are a helpful assistant that generates concise quick reply options for a chatbot. Generate ${options.count || 3} short, actionable quick reply labels based on the context provided.`
          },
          {
            role: 'user',
            content: `Context: ${options.context}\n\nRespond with a JSON object of the form {"replies": ["...", "..."]}.`
          }
        ],
        temperature: 0.7,
        max_tokens: 150,
        // json_object mode always returns an object (never a bare array),
        // so the prompt above asks for a {"replies": [...]} wrapper
        response_format: { type: 'json_object' }
      });

      const content = completion.choices[0]?.message?.content || '{"replies": []}';
      const parsed = JSON.parse(content);
      return Array.isArray(parsed?.replies) ? parsed.replies : [];
    } catch (error) {
      console.error('Error generating quick replies:', error);
      return ['Tell me more', 'Get started', 'Contact us']; // Fallback
    }
  }
}
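To see what actually goes over the wire, here is the message-array assembly from `generateResponse` as a standalone sketch, with the 10-message history cap from the intro folded in (types are simplified to just the fields the assembly uses):

```typescript
// Simplified types: only the fields the assembly reads.
interface MessageDTO {
  sender: 'user' | 'bot';
  content: string;
}
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

// Mirrors generateResponse: system prompt first, then the converted history
// (capped at the last 10 messages), then the new user message.
function buildMessages(
  systemPrompt: string,
  history: MessageDTO[],
  userMessage: string
): ChatMessage[] {
  const recent = history.slice(-10);
  return [
    { role: 'system', content: systemPrompt },
    ...recent.map((m): ChatMessage => ({
      role: m.sender === 'user' ? 'user' : 'assistant', // widget's 'bot' → OpenAI's 'assistant'
      content: m.content,
    })),
    { role: 'user', content: userMessage },
  ];
}
```

With a 12-message history, the array that reaches OpenAI is 12 entries: one system prompt, the 10 most recent history messages, and the new user message.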

Part 2: Integrating with AYW Bots

Update ChatbotService

Edit apps/backend/src/services/chatbotService.ts:

import { OpenAIService } from './openaiService';

export class ChatbotService {
  private openai: OpenAIService;

  constructor() {
    this.openai = new OpenAIService();
  }

  async processMessage(
    conversationId: string,
    userId: string | null,
    message: string,
    context: any
  ): Promise<ChatbotProcessResponse> {
    const { botId, conversationHistory, userPreferences } = context;

    // Get bot config for system prompt
    const botConfig = await prisma.botConfig.findUnique({
      where: { botId }
    });

    if (!botConfig || !botConfig.isActive) {
      return this.getFallbackResponse();
    }

    // Use OpenAI for supported bots
    const aiPoweredBots = ['support', 'sales', 'knowledge'];

    if (aiPoweredBots.includes(botId)) {
      return this.processWithAI(botId, botConfig, conversationHistory, message);
    }

    // Keep Welcome Bot logic rule-based
    if (botId === 'welcome') {
      return this.processWelcomeBot(message);
    }

    // Default fallback
    return this.getPlaceholderResponse(botId);
  }

  private async processWithAI(
    botId: string,
    botConfig: any,
    conversationHistory: MessageDTO[],
    userMessage: string
  ): Promise<ChatbotProcessResponse> {
    try {
      // Define system prompts per bot
      const systemPrompts: Record<string, string> = {
        support: `You are a customer support bot for As You Wish (AYW). 
                  Be helpful, empathetic, and concise. 
                  If you can't solve an issue, offer to escalate to a human.
                  Available capabilities: ${botConfig.capabilities.join(', ')}`,

        sales: `You are a sales bot for As You Wish (AYW) chatbot platform.
                Help potential customers understand our offerings.
                Be persuasive but not pushy. Highlight key benefits.
                Available capabilities: ${botConfig.capabilities.join(', ')}`,

        knowledge: `You are a knowledge base bot for AYW.
                    Provide accurate information from our documentation.
                    If you don't know something, say so and suggest contacting support.
                    Available capabilities: ${botConfig.capabilities.join(', ')}`
      };

      const result = await this.openai.generateResponse({
        botId,
        botName: botConfig.name,
        systemPrompt: systemPrompts[botId] || 'You are a helpful assistant.',
        conversationHistory,
        userMessage
      });

      // Generate dynamic quick replies
      const quickReplies = await this.openai.generateQuickReplies({
        botId,
        context: userMessage,
        count: 3
      });

      return {
        botId,
        response: result.response,
        suggestedActions: quickReplies.map(label => ({ label, value: label.toLowerCase().replace(/\s+/g, '_') })),
        escalate: result.confidence < 0.5,
        sentiment: 'neutral'
      };
    } catch (error: any) {
      console.error(`AI processing error for ${botId}:`, error);

      // Fallback to rule-based response
      if (botId === 'support') {
        return this.processSupportBot(userMessage);
      }

      return {
        botId,
        response: 'I apologize, but I\'m having trouble connecting to my AI service. Let me try a different approach.',
        suggestedActions: [
          { label: 'Try Again', value: 'retry' },
          { label: 'Talk to Human', value: 'escalate' }
        ],
        escalate: true,
        sentiment: 'negative'
      };
    }
  }
}
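One detail worth calling out from `processWithAI`: quick-reply labels are turned into snake_case action values. Isolated, the mapping looks like this:

```typescript
// Quick-reply labels become snake_case action values, as in processWithAI.
function toAction(label: string): { label: string; value: string } {
  return { label, value: label.toLowerCase().replace(/\s+/g, '_') };
}
```

So the label "Talk to Human" produces the action value `talk_to_human`, which the frontend can send back as the next user message.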

Part 3: Adding Token Tracking

Step 1: Create Token Usage Table

Add to apps/backend/prisma/schema.prisma:

model TokenUsage {
  id             String   @id @default(uuid())
  conversationId String   @map("conversation_id")
  botId          String   @map("bot_id")
  tokensUsed     Int      @map("tokens_used")
  model          String
  createdAt      DateTime @default(now()) @map("created_at")

  @@map("token_usage")
}

Run migration:

cd apps/backend
npx prisma migrate dev --name add_token_usage

Step 2: Track Token Usage

Update openaiService.ts:

import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

// Inside generateResponse method, after getting completion:
await prisma.tokenUsage.create({
  data: {
    conversationId: options.conversationId, // You'll need to pass this
    botId: options.botId,
    tokensUsed: tokensUsed,
    model: this.model
  }
});

Part 4: Testing Your Integration

Test 1: Basic AI Response

# Set your token
export ACCESS_TOKEN="your_access_token"

# Send a message to the Support Bot
curl -X POST http://localhost:4000/api/conversations \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "botId": "support",
    "initialMessage": "How do I reset my password?"
  }'

The response should now come from OpenAI instead of rule-based logic.

Test 2: Check Token Usage

SELECT 
  bot_id,
  model,
  SUM(tokens_used) as total_tokens,
  COUNT(*) as request_count
FROM token_usage
GROUP BY bot_id, model
ORDER BY total_tokens DESC;

Part 5: Cost Optimization

Model Selection

| Model | Input / Output (per 1M tokens) | Use Case |
|---|---|---|
| gpt-4o | $5.00 / $15.00 | Complex reasoning, high-quality responses |
| gpt-4o-mini | $0.15 / $0.60 | Most chatbot use cases (recommended) |
| gpt-3.5-turbo | $0.50 / $1.50 | Simple FAQs (legacy) |
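Using the prices above, a back-of-the-envelope cost estimator might look like this (prices change, so treat the table as a snapshot and confirm against OpenAI's pricing page before relying on the numbers):

```typescript
// Cost estimate from the table above: USD per 1M tokens as [input, output].
// Snapshot prices, not a source of truth.
const PRICING: Record<string, [number, number]> = {
  'gpt-4o': [5.0, 15.0],
  'gpt-4o-mini': [0.15, 0.6],
  'gpt-3.5-turbo': [0.5, 1.5],
};

function estimateCostUSD(model: string, inputTokens: number, outputTokens: number): number {
  const [inPrice, outPrice] = PRICING[model] ?? [0, 0]; // unknown model → $0 estimate
  return (inputTokens * inPrice + outputTokens * outPrice) / 1_000_000;
}
```

Pair this with the `token_usage` table to turn raw token counts into a monthly dollar figure per bot.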

Token Saving Tips

  1. Limit conversation history: Only send last 10 messages
  2. Use shorter system prompts: Be concise in instructions
  3. Set max_tokens: Cap response length
  4. Cache common responses: For FAQs, use rule-based first

Example - limiting history:

// Only use last 10 messages for context
const recentHistory = conversationHistory.slice(-10);
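Tip 4 can be as simple as an exact-match lookup before the OpenAI call (the FAQ entries and helper name here are illustrative):

```typescript
// Answer exact-match FAQs from a map before spending any tokens.
// Entries are illustrative; in practice you'd load these from your KB.
const FAQ_CACHE = new Map<string, string>([
  ['how do i reset my password?', 'Go to Settings → Security → Reset Password.'],
  ['what are your support hours?', 'Our team is available 9am–6pm UTC, Monday to Friday.'],
]);

function cachedAnswer(message: string): string | null {
  return FAQ_CACHE.get(message.trim().toLowerCase()) ?? null;
}

// In processMessage: if cachedAnswer(message) returns a hit,
// respond with it directly and skip the OpenAI call entirely.
```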

Part 6: Error Handling & Fallbacks

Rate Limit Handling

// In chatbotService.ts — add an attempt counter so retries are bounded
private async processWithAI(botId, botConfig, conversationHistory, userMessage, attempt = 0) {
  try {
    // ... OpenAI call
  } catch (error: any) {
    if (error.message.includes('RATE_LIMITED') && attempt < 3) {
      // Exponential backoff: wait 2s, 4s, then 8s before giving up
      await new Promise(resolve => setTimeout(resolve, 2000 * 2 ** attempt));
      return this.processWithAI(botId, botConfig, conversationHistory, userMessage, attempt + 1);
    }

    // Retries exhausted, or a non-rate-limit error: fall back to rule-based
    return this.getFallbackResponse();
  }
}

Circuit Breaker Pattern

For production, implement a circuit breaker to stop calling OpenAI when it's consistently failing:

class CircuitBreaker {
  private failures = 0;
  private lastFailureTime = 0;
  private state: 'closed' | 'open' | 'half-open' = 'closed';

  async call(fn: () => Promise<any>) {
    if (this.state === 'open') {
      if (Date.now() - this.lastFailureTime > 60000) {
        this.state = 'half-open';
      } else {
        throw new Error('Circuit breaker is open');
      }
    }

    try {
      const result = await fn();
      this.onSuccess();
      return result;
    } catch (error) {
      this.onFailure();
      throw error;
    }
  }

  private onSuccess() {
    this.failures = 0;
    this.state = 'closed';
  }

  private onFailure() {
    this.failures++;
    this.lastFailureTime = Date.now();
    if (this.failures >= 5) {
      this.state = 'open';
    }
  }
}
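A quick way to convince yourself the breaker behaves: hammer it with a failing call and confirm it short-circuits even a call that would have succeeded. The class is repeated here so the snippet runs standalone:

```typescript
class CircuitBreaker {
  private failures = 0;
  private lastFailureTime = 0;
  private state: 'closed' | 'open' | 'half-open' = 'closed';

  async call<T>(fn: () => Promise<T>): Promise<T> {
    if (this.state === 'open') {
      if (Date.now() - this.lastFailureTime > 60_000) this.state = 'half-open';
      else throw new Error('Circuit breaker is open');
    }
    try {
      const result = await fn();
      this.failures = 0;
      this.state = 'closed';
      return result;
    } catch (error) {
      this.failures++;
      this.lastFailureTime = Date.now();
      if (this.failures >= 5) this.state = 'open';
      throw error;
    }
  }
}

// Demo: after 5 consecutive failures the breaker opens and rejects
// even a call that would have succeeded.
async function demo(): Promise<string> {
  const breaker = new CircuitBreaker();
  const alwaysFails = () => Promise.reject(new Error('upstream down'));
  for (let i = 0; i < 5; i++) {
    try { await breaker.call(alwaysFails); } catch { /* expected */ }
  }
  try {
    await breaker.call(() => Promise.resolve('ok'));
    return 'not open';
  } catch (e) {
    return (e as Error).message;
  }
}
```

In the service, you would wrap each OpenAI call as `breaker.call(() => this.client.chat.completions.create(...))` and treat the "Circuit breaker is open" error as a signal to use the rule-based fallback.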

Troubleshooting

"Invalid API key" error?

  • Verify OPENAI_API_KEY in your .env file
  • Check the key at OpenAI Platform
  • Ensure no extra spaces or quotes around the key

Responses taking too long?

  • Check your max_tokens setting (lower = faster)
  • Use gpt-4o-mini instead of gpt-4o
  • Implement streaming responses (see OpenAI docs)

High costs?

  • Monitor token usage in the token_usage table
  • Set up billing alerts in OpenAI dashboard
  • Use shorter system prompts and limit history

Bot not using AI?

  • Check that the botId is in aiPoweredBots array
  • Verify the bot has is_active = true in the database
  • Check backend logs for OpenAI API errors

Next Steps

  1. Add streaming responses - Use Server-Sent Events (SSE) for real-time AI responses
  2. Implement function calling - Let OpenAI call your APIs directly
  3. Add sentiment analysis - Use AI to detect user frustration
  4. Build a prompt library - Create and test different system prompts


Integration Complete! 🎉

Your AYW bots are now powered by OpenAI. Monitor your token usage and costs, and iterate on your system prompts to improve response quality.

Questions? Drop them in the comments below! 👇


Tags: #ai #openai #nodejs #tutorial #integration #ayw

Series: AYW Developer Tutorials (Part 3 of 6)
