---
title: "Build AI-Powered Vue.js Apps: Integrate OpenAI API with Composition API and Pinia"
published: true
description: "Learn to build production-ready AI features in Vue 3 with proper state management, error handling, and cost optimization"
tags: vue, ai, openai, javascript, tutorial
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vue-ai-integration.png
---
AI integration is becoming essential for modern web applications, but many tutorials skip the production-ready details. Today, we'll build a smart text summarizer that demonstrates proper Vue 3 patterns, state management with Pinia, and real-world considerations like rate limiting and cost optimization.
## What We're Building
Our text summarizer will:
- Accept long-form text input
- Generate AI-powered summaries using OpenAI's API
- Cache results to minimize API costs
- Handle errors gracefully with retry logic
- Track token usage for cost monitoring
- Implement rate limiting for production use
## Project Setup
First, let's create a Vue 3 project with the necessary dependencies:
```bash
npm create vue@latest ai-text-summarizer
cd ai-text-summarizer
npm install pinia openai axios
```
Select TypeScript and Pinia when prompted during setup.
## Environment Configuration

Create a `.env` file for your OpenAI API key:
```bash
VITE_OPENAI_API_KEY=your_openai_api_key_here
VITE_MAX_REQUESTS_PER_MINUTE=10
VITE_CACHE_DURATION=3600000
```
**Security Note:** In production, never expose API keys in client-side code. Use a backend proxy instead. For this tutorial, we'll use client-side integration for simplicity.
## Setting Up the OpenAI Service

Create `src/services/openai.js`:
```javascript
import OpenAI from 'openai'

class OpenAIService {
  constructor() {
    this.client = new OpenAI({
      apiKey: import.meta.env.VITE_OPENAI_API_KEY,
      dangerouslyAllowBrowser: true // Only for demo - use a backend in production
    })
    this.requestQueue = []
    this.maxRequestsPerMinute = parseInt(import.meta.env.VITE_MAX_REQUESTS_PER_MINUTE) || 10
  }

  async summarizeText(text, maxTokens = 150) {
    await this.enforceRateLimit()

    try {
      const response = await this.client.chat.completions.create({
        model: 'gpt-3.5-turbo',
        messages: [
          {
            role: 'system',
            content: 'You are a helpful assistant that creates concise, accurate summaries of text content.'
          },
          {
            role: 'user',
            content: `Please summarize the following text in ${maxTokens} tokens or fewer:\n\n${text}`
          }
        ],
        max_tokens: maxTokens, // hard cap on response length
        temperature: 0.3
      })

      return {
        summary: response.choices[0].message.content,
        tokensUsed: response.usage.total_tokens,
        cost: this.calculateCost(response.usage.total_tokens)
      }
    } catch (error) {
      throw new Error(`OpenAI API Error: ${error.message}`)
    }
  }

  async enforceRateLimit() {
    const now = Date.now()
    // Drop timestamps that have aged out of the one-minute window
    this.requestQueue = this.requestQueue.filter(time => now - time < 60000)

    if (this.requestQueue.length >= this.maxRequestsPerMinute) {
      const waitTime = 60000 - (now - this.requestQueue[0])
      await new Promise(resolve => setTimeout(resolve, waitTime))
    }

    // Record the actual send time, not the time we started waiting
    this.requestQueue.push(Date.now())
  }

  calculateCost(tokens) {
    // Rough estimate at a flat $0.002 per 1K tokens; actual pricing
    // bills input and output tokens at different rates
    return (tokens / 1000) * 0.002
  }
}

export default new OpenAIService()
```
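The sliding-window logic in `enforceRateLimit` is the subtlest part of the service. Extracted into a standalone function with an explicit clock (the name `waitTimeMs` is mine, not part of the service), the same idea becomes easy to reason about and test:

```javascript
// Sliding-window rate limiter: given the timestamps of past requests,
// returns how many milliseconds the caller must wait before sending
// the next request, or 0 if it can go out immediately.
function waitTimeMs(timestamps, maxPerMinute, now) {
  const windowMs = 60000
  // Only requests inside the last minute count against the limit
  const recent = timestamps.filter(t => now - t < windowMs)
  if (recent.length < maxPerMinute) return 0
  // Otherwise wait until the oldest in-window request ages out
  return windowMs - (now - recent[0])
}
```

With a limit of 2 per minute and requests at t=0s and t=10s, a third request at t=20s must wait 40s for the t=0s entry to leave the window.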
## Pinia Store for State Management

Create `src/stores/summarizer.js`:
```javascript
import { defineStore } from 'pinia'
import { ref, computed } from 'vue'
import openaiService from '../services/openai'

export const useSummarizerStore = defineStore('summarizer', () => {
  // State
  const summaries = ref(new Map())
  const isLoading = ref(false)
  const error = ref(null)
  const totalTokensUsed = ref(0)
  const totalCost = ref(0)

  // Cache configuration
  const cacheTimeout = parseInt(import.meta.env.VITE_CACHE_DURATION) || 3600000 // 1 hour

  // Computed
  const hasError = computed(() => error.value !== null)
  const formattedCost = computed(() => `$${totalCost.value.toFixed(4)}`)

  // Actions
  async function summarizeText(text, options = {}) {
    const cacheKey = generateCacheKey(text, options)

    // Check cache first
    const cached = getCachedSummary(cacheKey)
    if (cached) {
      return cached
    }

    isLoading.value = true
    error.value = null

    try {
      const result = await openaiService.summarizeText(text, options.maxTokens)

      // Cache the result
      summaries.value.set(cacheKey, {
        ...result,
        timestamp: Date.now(),
        originalText: text.substring(0, 100) + '...'
      })

      // Update usage statistics
      totalTokensUsed.value += result.tokensUsed
      totalCost.value += result.cost

      return result
    } catch (err) {
      error.value = err.message
      throw err
    } finally {
      isLoading.value = false
    }
  }

  function getCachedSummary(cacheKey) {
    const cached = summaries.value.get(cacheKey)
    if (cached && Date.now() - cached.timestamp < cacheTimeout) {
      return cached
    }
    if (cached) {
      summaries.value.delete(cacheKey) // evict the expired entry
    }
    return null
  }

  function generateCacheKey(text, options) {
    // Simple djb2-style string hash over the full text; unlike btoa,
    // it won't throw on non-Latin1 (e.g. accented or CJK) input
    let hash = 5381
    for (let i = 0; i < text.length; i++) {
      hash = ((hash << 5) + hash + text.charCodeAt(i)) | 0
    }
    return `${hash}_${options.maxTokens || 150}`
  }

  function clearError() {
    error.value = null
  }

  function clearCache() {
    summaries.value.clear()
  }

  return {
    // State
    summaries,
    isLoading,
    error,
    totalTokensUsed,
    totalCost,
    // Computed
    hasError,
    formattedCost,
    // Actions
    summarizeText,
    clearError,
    clearCache
  }
})
```
## Building the Summarizer Component

Create `src/components/TextSummarizer.vue`:
```vue
<template>
  <div class="text-summarizer">
    <div class="input-section">
      <label for="text-input">Enter text to summarize:</label>
      <textarea
        id="text-input"
        v-model="inputText"
        placeholder="Paste your text here..."
        :disabled="isLoading"
        rows="8"
      />

      <div class="controls">
        <label>
          Max summary length:
          <select v-model="maxTokens" :disabled="isLoading">
            <option value="100">Short (100 tokens)</option>
            <option value="150">Medium (150 tokens)</option>
            <option value="250">Long (250 tokens)</option>
          </select>
        </label>

        <button
          @click="handleSummarize"
          :disabled="!canSummarize"
          class="summarize-btn"
        >
          {{ isLoading ? 'Summarizing...' : 'Summarize' }}
        </button>
      </div>
    </div>

    <div v-if="hasError" class="error-message">
      <p>{{ error }}</p>
      <button @click="clearError" class="retry-btn">Dismiss</button>
    </div>

    <div v-if="currentSummary" class="summary-section">
      <h3>Summary</h3>
      <div class="summary-content">
        <p>{{ currentSummary.summary }}</p>
        <div class="summary-meta">
          <span>Tokens used: {{ currentSummary.tokensUsed }}</span>
          <span>Cost: ${{ currentSummary.cost.toFixed(4) }}</span>
        </div>
      </div>
    </div>

    <div class="usage-stats">
      <h4>Session Statistics</h4>
      <p>Total tokens used: {{ totalTokensUsed }}</p>
      <p>Total cost: {{ formattedCost }}</p>
      <button @click="clearCache" class="clear-btn">Clear Cache</button>
    </div>
  </div>
</template>

<script setup>
import { ref, computed } from 'vue'
import { storeToRefs } from 'pinia'
import { useSummarizerStore } from '../stores/summarizer'

const store = useSummarizerStore()

// Destructuring the store directly would lose reactivity;
// storeToRefs keeps state and getters reactive
const { isLoading, error, hasError, totalTokensUsed, formattedCost } = storeToRefs(store)

// Local state
const inputText = ref('')
const maxTokens = ref(150)
const currentSummary = ref(null)

// Computed properties
const canSummarize = computed(() => {
  return inputText.value.trim().length > 50 && !store.isLoading
})

// Methods
async function handleSummarize() {
  if (!canSummarize.value) return

  try {
    currentSummary.value = await store.summarizeText(inputText.value, {
      maxTokens: parseInt(maxTokens.value)
    })
  } catch (err) {
    console.error('Summarization failed:', err)
  }
}

function clearError() {
  store.clearError()
}

function clearCache() {
  store.clearCache()
  currentSummary.value = null
}
</script>

<style scoped>
.text-summarizer {
  max-width: 800px;
  margin: 0 auto;
  padding: 2rem;
}

.input-section {
  margin-bottom: 2rem;
}

textarea {
  width: 100%;
  padding: 1rem;
  border: 2px solid #e1e5e9;
  border-radius: 8px;
  font-family: inherit;
  resize: vertical;
}

.controls {
  display: flex;
  justify-content: space-between;
  align-items: center;
  margin-top: 1rem;
  gap: 1rem;
}

.summarize-btn {
  background: #007acc;
  color: white;
  border: none;
  padding: 0.75rem 1.5rem;
  border-radius: 6px;
  cursor: pointer;
  font-weight: 600;
}

.summarize-btn:disabled {
  background: #ccc;
  cursor: not-allowed;
}

.error-message {
  background: #fee;
  border: 1px solid #fcc;
  padding: 1rem;
  border-radius: 6px;
  margin-bottom: 1rem;
}

.summary-section {
  background: #f8f9fa;
  padding: 1.5rem;
  border-radius: 8px;
  margin-bottom: 2rem;
}

.summary-meta {
  display: flex;
  gap: 1rem;
  margin-top: 1rem;
  font-size: 0.9rem;
  color: #666;
}

.usage-stats {
  border-top: 1px solid #e1e5e9;
  padding-top: 1rem;
  font-size: 0.9rem;
}
</style>
```
## Production Deployment Considerations
For production deployment, implement these security measures:
### 1. Backend API Proxy
Create a backend endpoint that handles OpenAI requests:
```javascript
// backend/routes/ai.js
app.post('/api/summarize', async (req, res) => {
  const { text, maxTokens } = req.body

  // Add authentication, rate limiting, and input validation here
  if (!isValidUser(req)) {
    return res.status(401).json({ error: 'Unauthorized' })
  }

  try {
    const result = await openaiService.summarizeText(text, maxTokens)
    res.json(result)
  } catch (error) {
    res.status(500).json({ error: error.message })
  }
})
```
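`isValidUser` above is a placeholder for your own auth check. The input validation it sits next to can be sketched like this (the limits and the `validateSummarizeRequest` name are illustrative choices, not a fixed API — tune them for your use case):

```javascript
// Validate the summarize request body before spending any tokens on it.
// Returns { valid, errors, maxTokens } so the route can respond with 400s.
function validateSummarizeRequest(body) {
  const errors = []

  if (typeof body.text !== 'string' || body.text.trim().length < 50) {
    errors.push('text must be a string of at least 50 characters')
  }
  if (typeof body.text === 'string' && body.text.length > 20000) {
    errors.push('text must be under 20,000 characters')
  }

  // Clamp the requested summary length to a sane range
  const maxTokens = Number(body.maxTokens ?? 150)
  if (!Number.isInteger(maxTokens) || maxTokens < 50 || maxTokens > 500) {
    errors.push('maxTokens must be an integer between 50 and 500')
  }

  return { valid: errors.length === 0, errors, maxTokens }
}
```

In the route, reject early with `res.status(400).json({ errors })` whenever `valid` is false, so malformed or oversized requests never reach the OpenAI client.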
### 2. Environment Variables for Production
Update your deployment configuration:
```bash
# Production .env
VITE_API_BASE_URL=https://your-api.com
VITE_MAX_REQUESTS_PER_MINUTE=5
VITE_CACHE_DURATION=7200000
```
### 3. Error Boundary and Monitoring
Add error tracking and monitoring:
```javascript
// main.js — Sentry.init needs the app instance from createApp
import { createApp } from 'vue'
import * as Sentry from '@sentry/vue'
import App from './App.vue'

const app = createApp(App)

Sentry.init({
  app,
  dsn: 'your-sentry-dsn' // replace with your project's DSN
})

app.mount('#app')
```
## Cost Optimization Tips
- **Implement aggressive caching:** cache results for identical inputs
- **Use shorter prompts:** reduce token usage in system messages
- **Batch requests:** process multiple summaries together when possible
- **Set token limits:** prevent unexpectedly long responses
- **Monitor usage:** track costs and set alerts
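Aggressive caching is the highest-leverage tip on this list. Stripped of Pinia, the store's TTL cache reduces to this pattern (a minimal sketch; the injectable `clock` parameter exists only to make expiry testable):

```javascript
// Minimal TTL cache: entries expire ttlMs after they were written.
function createTtlCache(ttlMs, clock = Date.now) {
  const entries = new Map()
  return {
    get(key) {
      const hit = entries.get(key)
      if (!hit) return undefined
      if (clock() - hit.at >= ttlMs) {
        entries.delete(key) // lazily evict the stale entry
        return undefined
      }
      return hit.value
    },
    set(key, value) {
      entries.set(key, { at: clock(), value })
    }
  }
}
```

Every cache hit is an API call you did not pay for, so even a short TTL pays for itself when users re-submit the same text.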
## Conclusion
We've built a production-ready AI-powered Vue.js application that demonstrates:
- Proper state management with Pinia
- Error handling and loading states
- Cost optimization through caching
- Rate limiting for API protection
- Security considerations for deployment
The key to successful AI integration is treating it like any other external service - with proper error handling, caching, and monitoring. Start with simple features like text summarization, then expand to more complex AI capabilities as your application grows.
Remember: always proxy AI API calls through your backend in production to protect API keys and implement proper rate limiting. The client-side approach shown here is for development and learning purposes only.
Happy coding! 🚀