This system implements an AI chatbot that checks the database for a stored answer before calling GPT-4o. It supports multiple projects, storing AI responses separately for each project, and newly generated responses are saved in the background to improve future interactions.
Features
✅ Database First: If a stored prompt matches the incoming one with at least 80% similarity, its response is returned immediately.
✅ GPT-4o Fallback: If no match is found, call GPT-4o and return the response.
✅ Background Job: Store new AI responses in the background using Laravel Queues.
✅ Multi-Project Support: Each AI response is linked to a specific project.
✅ Rate Limiting: Prevents spam and abuse.
Database Design
`conversations` Table (Stores Trained AI Data)

| Column | Type | Description |
|---|---|---|
| id | bigint | Primary key |
| project | string | Project name |
| prompt | text | User query (unique) |
| response | text | AI-generated response |
| created_at | timestamp | Timestamp |
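A migration for this table is not shown in the post; a minimal sketch might look like the following. The FULLTEXT index is my addition, but it is required by the `MATCH ... AGAINST` query used later in the controller (MySQL/MariaDB only):

```php
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    public function up(): void
    {
        Schema::create('conversations', function (Blueprint $table) {
            $table->id();
            $table->string('project');
            $table->text('prompt');
            $table->text('response');
            $table->timestamps();

            // Needed for the MATCH ... AGAINST lookup in the controller
            // (MySQL/MariaDB only). Note: a plain unique index on a TEXT
            // column needs a key length on MySQL, so the "unique" constraint
            // from the table above is omitted here.
            $table->fullText('prompt');
        });
    }

    public function down(): void
    {
        Schema::dropIfExists('conversations');
    }
};
```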
`chat_logs` Table (Logs Each Chat Interaction)

| Column | Type | Description |
|---|---|---|
| id | bigint | Primary key |
| project | string | Project name |
| prompt | text | User query |
| response | text | Response sent to user |
| source | enum(database, AI) | Whether the response came from the DB or the AI |
| created_at | timestamp | Timestamp |
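A corresponding migration sketch for the log table, with column names taken directly from the table above:

```php
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    public function up(): void
    {
        Schema::create('chat_logs', function (Blueprint $table) {
            $table->id();
            $table->string('project');
            $table->text('prompt');
            $table->text('response');
            // Matches the 'source' values written by the controller.
            $table->enum('source', ['database', 'AI']);
            $table->timestamps();
        });
    }

    public function down(): void
    {
        Schema::dropIfExists('chat_logs');
    }
};
```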
API Implementation
Request Format

```json
{
  "project": "job_portal",
  "prompt": "How to create a job listing?"
}
```

Response (From Database)

```json
{
  "source": "database",
  "response": "You can create a job listing by..."
}
```

Response (From GPT-4o, If No Match)

```json
{
  "source": "AI",
  "response": "To create a job listing, follow these steps..."
}
```
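The route that exposes this endpoint is not shown in the post; a minimal registration in `routes/api.php` might look like this (the `/chat` path is an assumption):

```php
<?php

use App\Http\Controllers\AIController;
use Illuminate\Support\Facades\Route;

// POST /api/chat — the path is an assumption; adjust to your API design.
// Rate limiting is handled inside the controller itself.
Route::post('/chat', [AIController::class, 'chat']);
```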
Backend Implementation
🔹 AIController.php
Handles chat request, database check, GPT-4o API call, and background job dispatch.
```php
<?php

namespace App\Http\Controllers;

use App\Jobs\StoreTrainedData;
use App\Models\ChatLog;
use App\Models\Conversation;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Http;
use Illuminate\Support\Facades\RateLimiter;

class AIController extends Controller
{
    public function chat(Request $request)
    {
        $request->validate([
            'project' => 'required|string',
            'prompt'  => 'required|string',
        ]);

        $project = $request->input('project');
        $prompt  = $request->input('prompt');

        // Rate limiting: at most 10 requests per IP per 60 seconds.
        $key = 'chat:' . $request->ip();
        if (RateLimiter::tooManyAttempts($key, 10)) {
            return response()->json([
                'error' => 'Too many requests. Please wait a while before trying again.',
            ], 429);
        }
        RateLimiter::hit($key, 60);

        // Check the database for a similar prompt.
        $similarConversation = $this->findSimilarPrompt($project, $prompt);

        if ($similarConversation) {
            ChatLog::create([
                'project'  => $project,
                'prompt'   => $prompt,
                'response' => $similarConversation->response,
                'source'   => 'database',
            ]);

            return response()->json([
                'source'   => 'database',
                'response' => $similarConversation->response,
            ]);
        }

        // No match: call GPT-4o.
        $aiResponse = $this->callGPT4o($prompt);

        // Save the new response in the background for future lookups.
        StoreTrainedData::dispatch($project, $prompt, $aiResponse)->onQueue('default');

        // Log the chat interaction.
        ChatLog::create([
            'project'  => $project,
            'prompt'   => $prompt,
            'response' => $aiResponse,
            'source'   => 'AI',
        ]);

        return response()->json([
            'source'   => 'AI',
            'response' => $aiResponse,
        ]);
    }

    private function findSimilarPrompt($project, $prompt)
    {
        // Requires a FULLTEXT index on `prompt` (MySQL/MariaDB).
        $similarPrompts = Conversation::where('project', $project)
            ->whereRaw('MATCH(prompt) AGAINST(? IN NATURAL LANGUAGE MODE)', [$prompt])
            ->get();

        foreach ($similarPrompts as $conv) {
            if ($this->calculateSimilarity($prompt, $conv->prompt) >= 80) {
                return $conv;
            }
        }

        return null;
    }

    private function calculateSimilarity($input1, $input2)
    {
        // similar_text() writes the similarity percentage into $percent.
        similar_text($input1, $input2, $percent);

        return $percent;
    }

    private function callGPT4o($prompt)
    {
        $response = Http::withHeaders([
            // Use config() rather than env() so this still works when the
            // config is cached; define 'openai.key' in config/services.php.
            'Authorization' => 'Bearer ' . config('services.openai.key'),
        ])->post('https://api.openai.com/v1/chat/completions', [
            'model'       => 'gpt-4o',
            'messages'    => [['role' => 'user', 'content' => $prompt]],
            'temperature' => 0.7,
        ]);

        return $response->json()['choices'][0]['message']['content'] ?? 'No response';
    }
}
```
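The 80% threshold relies on PHP's built-in `similar_text()`, which reports a percentage through its third by-reference argument. A standalone sketch of how the match decision behaves (the example prompts are my own):

```php
<?php

// Mirrors calculateSimilarity() in the controller: returns the
// similarity percentage between two strings.
function similarityPercent(string $a, string $b): float
{
    similar_text($a, $b, $percent);

    return $percent;
}

$stored   = 'How to create a job listing?';
$incoming = 'How do I create a job listing?';

// Near-identical prompts clear the 80% threshold.
var_dump(similarityPercent($stored, $incoming) >= 80); // bool(true)
```

Note that `similar_text()` is O(n³) in the worst case and purely lexical; for large datasets or paraphrased queries, embedding-based similarity would scale better.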
🔹 StoreTrainedData.php (Background Job)
Stores AI responses in the background after returning the response.
```php
<?php

namespace App\Jobs;

use App\Models\Conversation;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class StoreTrainedData implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    protected $project;
    protected $prompt;
    protected $response;

    public function __construct($project, $prompt, $response)
    {
        $this->project  = $project;
        $this->prompt   = $prompt;
        $this->response = $response;
    }

    public function handle()
    {
        Conversation::create([
            'project'  => $this->project,
            'prompt'   => $this->prompt,
            'response' => $this->response,
        ]);
    }
}
```
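Both the controller and the job call `Conversation::create()` and `ChatLog::create()`, which require the columns to be mass assignable. The models are not shown in the post; minimal versions would look like the sketch below (shown together for brevity — in a real app each class lives in its own file under `app/Models`):

```php
<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;

class Conversation extends Model
{
    // Mass assignment is required because the job calls Conversation::create().
    protected $fillable = ['project', 'prompt', 'response'];
}

class ChatLog extends Model
{
    // 'source' is either 'database' or 'AI', as written by the controller.
    protected $fillable = ['project', 'prompt', 'response', 'source'];
}
```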
Setup Laravel Queues
1. Set Up Database Queue
Run:

```sh
php artisan queue:table
php artisan migrate
```

Update `.env`:

```env
QUEUE_CONNECTION=database
```

Start the queue worker:

```sh
php artisan queue:work
```
Final Workflow
1️⃣ User sends a chat request with project name.
2️⃣ API checks the database for that project (returns response if found).
3️⃣ If no match, GPT-4o is called, and the response is sent immediately.
4️⃣ The response is saved in the background for future use.
5️⃣ Future queries for that project are answered from the database.
Benefits
✅ Faster Responses (Database First, GPT-4o Second)
✅ Project-Based Data Storage (Multiple Projects Supported)
✅ Background Processing (No User Delay)
✅ Scalable (Handles Large Datasets Efficiently)
✅ Self-Learning (AI Gets Smarter Over Time)