Usenmfon

🤖 Building a Laravel AI Agent That Chats with Telex.im Using Neuron AI + Gemini 2.5 Flash


🌟 Introduction

Imagine chatting with an intelligent Laravel agent right inside your Telex.im workspace, one that can explain, generate, or fix code instantly.

That's exactly what I built: Dev Assist, a Laravel-based AI assistant powered by Neuron AI and Google Gemini 2.5 Flash, integrated with Telex.im using a workflow node (a2a/mastra-a2a-node) and publicly accessible via Expose and Render.

This setup creates a complete conversational loop between Telex and Laravel, driven by cutting-edge AI.


🧠 What Powers Dev Assist

| Component | Purpose |
| --- | --- |
| Laravel | Backend logic, routing, and message orchestration |
| Neuron AI (neuron-core/neuron-ai) | AI framework bridging Laravel with the Gemini API |
| Gemini 2.5 Flash | The reasoning and code-generation model |
| Telex.im | The collaboration environment where the agent lives |
| Expose + Render | Publicly expose and host the Laravel endpoint |

βš™οΈ Step 1: Setting Up Laravel + Neuron AI

Install the Neuron AI package (core version):

composer require neuron-core/neuron-ai

Then configure your .env file with Gemini credentials:

NEURON_PROVIDER=gemini
GEMINI_API_KEY=your_gemini_api_key
GEMINI_MODEL=gemini-2.5-flash

🧩 neuron-core acts as an abstraction layer, letting you switch between OpenAI, Gemini, and other providers easily.

🧩 Step 2: Build the AI Service in Laravel

Here's the DevAssistService class that handles intent detection, Neuron AI communication, and Telex responses:

<?php

namespace App\Services;

use App\Neuron\DevAssistAgent;
use Illuminate\Support\Facades\Http;
use Illuminate\Support\Facades\Log;
use NeuronAI\Chat\Messages\UserMessage;

class DevAssistService
{
    public function detectIntent(string $message): string
    {
        $msg = strtolower($message);
        return match (true) {
            str_contains($msg, 'explain') => 'explain_code',
            str_contains($msg, 'generate') => 'generate_code',
            str_contains($msg, 'fix') => 'fix_code',
            default => 'general',
        };
    }

    public function processMessage(string $intent, string $message): string
    {
        $prefixed = match ($intent) {
            'explain_code' => "[EXPLAIN]\n{$message}",
            'generate_code' => "[GENERATE]\n{$message}",
            'fix_code' => "[FIX]\n{$message}",
            default => $message,
        };

        try {
            $agent = DevAssistAgent::make();
            $result = $agent->chat(new UserMessage($prefixed));
            return $result->content ?? 'No response received.';
        } catch (\Throwable $e) {
            Log::error('Agent error: '.$e->getMessage());
            return "Sorry, the Dev Assist agent failed to respond. Try again later.";
        }
    }
}
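The service above calls DevAssistAgent::make(), a Neuron AI agent class. Here's a minimal sketch of what that class can look like; the exact namespaces of the Agent base class and the Gemini provider are assumptions based on Neuron AI's documented structure, so check the package docs for your installed version:

```php
<?php

namespace App\Neuron;

use NeuronAI\Agent;
use NeuronAI\Providers\AIProviderInterface;
use NeuronAI\Providers\Gemini\Gemini;

class DevAssistAgent extends Agent
{
    // Tell Neuron AI which provider and model to use.
    // Reading from config() keeps the API key out of source control.
    protected function provider(): AIProviderInterface
    {
        return new Gemini(
            key: config('services.gemini.key'),
            model: config('services.gemini.model', 'gemini-2.5-flash'),
        );
    }

    // System instructions that frame every conversation with the agent.
    public function instructions(): string
    {
        return 'You are Dev Assist, a Laravel development assistant. '
            .'Messages may be prefixed with [EXPLAIN], [GENERATE], or [FIX] '
            .'to indicate the requested task.';
    }
}
```

With this in place, DevAssistAgent::make()->chat(new UserMessage($prefixed)) in the service resolves the provider, sends the message to Gemini, and returns the model's reply.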

🧠 What Makes Gemini 2.5 Flash Special

- Built for fast inference and low latency
- Strong at code reasoning and language understanding
- Works seamlessly through Neuron AI without changing your Laravel code
- Supports both short-form chat and structured JSON responses
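To complete the conversational loop described in the introduction, the Telex workflow node just needs a Laravel endpoint to POST messages to. A minimal sketch of that webhook route follows; the /api/dev-assist path and the "message" payload field are assumptions here, so match them to whatever your a2a node is configured to send:

```php
<?php

use App\Services\DevAssistService;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;

// Hypothetical webhook endpoint for the Telex a2a workflow node.
// Laravel injects DevAssistService into the closure automatically.
Route::post('/api/dev-assist', function (Request $request, DevAssistService $service) {
    // Field name "message" is an assumption; adapt to your node's payload.
    $message = (string) $request->input('message', '');

    $intent = $service->detectIntent($message);
    $reply  = $service->processMessage($intent, $message);

    // Return JSON for the workflow node to relay back into the Telex channel.
    return response()->json(['reply' => $reply]);
});
```

Once this route is exposed publicly (via Expose locally, or Render in production), every message sent to the agent in Telex flows through intent detection, Neuron AI, and Gemini, and the reply lands back in the workspace.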
