Dewald Hugo

Posted on • Originally published at origin-main.com

What Laravel 13 Actually Changes for AI Development

Laravel 13 dropped a production-stable AI SDK on release day and nobody's talking about what it actually replaces. Here's the full breakdown.

Laravel 13 launched on March 17, 2026, and Taylor Otwell kept his Laracon EU promise: zero breaking changes, a clean upgrade from Laravel 12, and one headline shift that resets the default approach to AI in PHP. The Laravel 13 AI SDK is now production-stable, first-party, and catalogued inside the official Laravel documentation as a core concern — not a side-effect of a community package you bolt on after you’ve already made architectural decisions you’ll later regret.

This is not a release you skim for changelog bullet points and move on. For anyone building AI-powered features on Laravel, this release fundamentally changes how you should be structuring that work. Let’s go through it properly.

Why Laravel 13 Is an AI-First Release

Laravel 13 continues Laravel’s annual release cadence with a focus on AI-native workflows, stronger defaults, and more expressive developer APIs. The official framing is “minimal breaking changes,” and that’s accurate — but undersells it. The infrastructure laid here directly shapes whether your AI features remain maintainable at scale or turn into the kind of controller-level spaghetti you spend the next two years paying down.

The Laravel AI SDK moves from beta to production-stable on the same day as Laravel 13. It is included as a first-party package and gives you a single, provider-agnostic interface for text generation, tool-calling agents, image creation, audio synthesis, and embedding generation. The SDK handles retry logic, error normalisation, and queue integration behind the scenes. You get all of that without writing a custom abstraction layer, and without coupling your application to a specific provider’s SDK contract.

That last point deserves more weight than the release notes give it.

The Laravel AI SDK: What It Actually Gives You

Text Generation and Agents

The simplest entry point looks like this:

use App\Ai\Agents\SalesCoach;
$response = SalesCoach::make()->prompt('Analyse this sales transcript...');
return (string) $response;

With the AI SDK, you can build provider-agnostic AI features while keeping a consistent, Laravel-native developer experience. That means your SalesCoach agent does not care whether it’s backed by OpenAI, Anthropic, or Google Gemini. You wire the provider in config/ai.php and the agent contract stays unchanged.

The default models used for text, images, audio, transcription, and embeddings are now configurable in your application’s config/ai.php file. This gives you granular control over the exact models you’d like to use if you don’t want to rely on the package defaults. A minimal configuration targeting Anthropic looks like this:

// config/ai.php
return [
    'models' => [
        'text' => [
            'default'  => env('AI_TEXT_MODEL', 'claude-sonnet-4-6'),
            'cheapest' => 'claude-haiku-4-5-20251001',
            'smartest' => 'claude-opus-4-6',
        ],
    ],
];

You are not hardcoding a model string into a service class. You are not parsing a .env file in three different controllers. One config file governs the whole application. If you need to roll back a model mid-incident, it’s one value and a php artisan config:cache.

Images and Audio

For visual generation use cases, the SDK offers a clean API for creating images from plain-language prompts. For voice experiences, you can synthesize natural-sounding audio from text for assistants, narrations, and accessibility features.

use Laravel\Ai\Image;
use Laravel\Ai\Audio;
// Image generation
$image = Image::of('A product shot of a minimalist desk lamp')->generate();
$rawContent = (string) $image;
// Audio synthesis
$audio = Audio::of('Your order has been confirmed.')->generate();
$rawContent = (string) $audio;

The fluent API is consistent across modalities. That consistency is not accidental. It means a developer who’s only worked with text generation can pick up image or audio synthesis in minutes — no mental context switch, no separate SDK documentation to parse.

Embeddings and the Str Helper

Here’s where it gets genuinely interesting. Embedding generation is wired directly into Laravel’s Str helper:

use Illuminate\Support\Str;
$vector = Str::of('Napa Valley has exceptional Cabernet Sauvignon.')->toEmbeddings();

That’s not a convenience method. That’s the framework signalling that embeddings are now a first-class data type. You’re not reaching for a one-off utility class — you’re calling a helper that sits alongside Str::slug() and Str::limit() as a standard part of the toolkit.

[Architect’s Note] The fact that toEmbeddings() lives on the Str helper rather than a dedicated Embedding facade is a deliberate design choice. It keeps embedding generation composable with the rest of your string manipulation pipeline. Chain it after a limit() call to trim tokens before you generate — or after markdown() to clean formatting before vectorising. Think of it as the framework nudging you toward preprocessing being part of the same expression.
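As a sketch of that composability — the chain order is illustrative, and toEmbeddings() is the article's assumed Laravel 13 SDK macro, not a documented Laravel 12 method:

```php
use Illuminate\Support\Str;

// Hypothetical preprocessing chain: render Markdown to HTML, strip the
// tags back out to plain text, cap the length as a crude token-budget
// guard, then embed — all in one fluent expression.
$vector = Str::of($document->body)
    ->markdown()        // Markdown -> HTML
    ->stripTags()       // HTML -> plain text
    ->limit(2000)       // trim before the (billed) embedding call
    ->toEmbeddings();
```

markdown(), stripTags(), and limit() are standard Stringable methods, which is the whole point of the design: the embedding call slots into a pipeline you already know.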

Native Vector Search: Semantic Queries Without a Search Engine Subscription

Laravel 13 deepens its semantic search story with native vector query support, embedding workflows, and related APIs. These features make it straightforward to build AI-powered search experiences using PostgreSQL and pgvector, including similarity search against embeddings generated directly from strings.

The query builder extension looks like this:

use Illuminate\Support\Facades\DB;
$documents = DB::table('documents')
    ->whereVectorSimilarTo('embedding', 'Best restaurants in Cape Town')
    ->limit(10)
    ->get();

Under the hood, this generates a pgvector-compatible cosine similarity query against your Postgres database. No Elasticsearch cluster. No Typesense subscription. No Algolia bill. For a significant number of use cases — internal knowledge bases, product catalogue search, customer support retrieval — Postgres with pgvector is more than sufficient, and the operational overhead is zero if you’re already on Postgres.
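As a rough sketch — not the builder's verbatim output — a cosine-distance pgvector query looks like this, assuming the search phrase has already been embedded into the bound :query_vector parameter:

```sql
-- pgvector's <=> operator is cosine distance; smaller is more similar,
-- so ordering ascending returns the closest documents first.
SELECT *
FROM documents
ORDER BY embedding <=> :query_vector
LIMIT 10;
```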

The migration for the vector column uses a new column type:

use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;
Schema::create('documents', function (Blueprint $table) {
    $table->id();
    $table->text('content');
    $table->vector('embedding', 1536); // dimension matches your model's output
    $table->timestamps();
});

A realistic ingestion pipeline that generates and stores embeddings looks like this:

use App\Models\Document;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Str;
class IngestDocumentJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;
    public function __construct(private readonly string $content) {}
    public function handle(): void
    {
        // Generate the embedding off the request path, then persist it.
        $embedding = Str::of($this->content)->toEmbeddings();
        Document::create([
            'content'   => $this->content,
            'embedding' => $embedding,
        ]);
    }
}

Dispatch it from a controller, and your embedding pipeline is queued, retried on failure, and observable via Horizon — exactly the same way you’d treat any other background job.
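The dispatch site stays a one-liner. For instance, in a controller action — the request field name is illustrative:

```php
// Queue the embedding work instead of blocking the request on a
// 300–800ms provider call; 202 signals "accepted, processing later".
IngestDocumentJob::dispatch($request->string('content')->toString());

return response()->json(['status' => 'queued'], 202);
```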

[Production Pitfall] Embedding API calls are slow — typically 300–800ms depending on payload size and provider latency. Do not generate embeddings synchronously in a request lifecycle. Every embedding generation call belongs in a queued job, period. Under load, synchronous embedding generation will saturate your PHP-FPM pool faster than almost anything else. We’ve seen this exact pattern cause cascading timeouts in staging under traffic replay. Don’t let it reach production.

Provider Agnosticism: What It Means Practically

Switch providers by changing one config value. Teams that offer professional Laravel development services can now build AI-powered features inside a standard Laravel project — no custom abstraction layers, no third-party SDK gymnastics required.

This matters more than it reads. Prior to the AI SDK going stable, the common pattern was to inject an OpenAI client through the Service Container, write a thin wrapper around it, and pray that the wrapper held when you inevitably needed to test responses or swap a model mid-sprint. Most teams ended up with one of three things: a god-service that knew too much, a leaky abstraction that only worked for their current provider, or no abstraction at all and raw Http::post() calls in controllers.

The AI SDK replaces all three anti-patterns with a single, testable, swappable interface that the framework itself maintains. If the underlying provider deprecates an API method, the fix lands in a framework update — not in your codebase.

Here’s how you’d swap to a different provider for a specific agent without touching the rest of the application:

use Laravel\Ai\Facades\Ai;
use Laravel\Ai\Enums\Provider;
$response = Ai::using(Provider::Anthropic)
    ->withModel('claude-opus-4-6')
    ->prompt('Summarise this legal document...');

That using() call overrides the config/ai.php default for this invocation only. You can do per-request provider overrides without touching global configuration. That’s useful in multi-tenant applications where different plans map to different model tiers.
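A hedged sketch of that multi-tenant pattern — the plan names are hypothetical, the model IDs follow the config example earlier, and using()/withModel() are the SDK calls shown above:

```php
use Laravel\Ai\Facades\Ai;
use Laravel\Ai\Enums\Provider;

// Map the tenant's billing plan to a model tier (names are illustrative).
$model = match ($tenant->plan) {
    'enterprise' => 'claude-opus-4-6',
    'pro'        => 'claude-sonnet-4-6',
    default      => 'claude-haiku-4-5-20251001',
};

// Per-request override: global config/ai.php defaults stay untouched.
$response = Ai::using(Provider::Anthropic)
    ->withModel($model)
    ->prompt($prompt);
```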

If you’re already running a custom abstraction over the OpenAI PHP SDK, we wrote about how to migrate that pattern properly — including how to handle cost visibility and telemetry — in Production-Grade AI Architecture in Laravel: Contracts, Governance & Telemetry. The AI SDK doesn’t replace the need for those architectural decisions; it gives you a better foundation to implement them on.

Tool-Calling Agents: Building Agentic Workflows Natively

The AI SDK’s agent support deserves its own section. Tool-calling — the mechanism by which an LLM decides to invoke a function in your application — is the primitive that makes agentic workflows possible.

A basic agent with tools looks like this:

namespace App\Ai\Agents;
use App\Models\Order;
use Laravel\Ai\Agent;
use Laravel\Ai\Attributes\Tool;
class OrderAssistant extends Agent
{
    protected string $model = 'claude-sonnet-4-6';
    protected string $instructions = 'You are a helpful order management assistant.';
    #[Tool('Look up the status of an order by ID')]
    public function getOrderStatus(int $orderId): string
    {
        $order = Order::findOrFail($orderId);
        return "Order #{$orderId} is currently {$order->status}.";
    }
    #[Tool('List all open orders for a given customer')]
    public function listOpenOrders(int $customerId): array
    {
        return Order::where('customer_id', $customerId)
            ->where('status', 'open')
            ->pluck('id')
            ->toArray();
    }
}

The #[Tool] attribute wires the method directly into the model’s tool-calling interface. The SDK handles the round-trip: the model sees the tool definition, decides to call it, the SDK invokes your method, and the result is injected back into the conversation automatically.

Notice that getOrderStatus and listOpenOrders are standard Eloquent queries. There is nothing AI-specific about the business logic inside the tools. This is the correct separation. Your tools are Laravel code. The AI SDK manages the protocol layer between your code and the model.

[Edge Case Alert] Tool-calling agents can enter infinite loops if the model repeatedly decides to call the same tool with the same arguments — this happens when the tool’s return value doesn’t advance the conversation’s goal. Always set a maxSteps limit on your agents and handle MaxStepsExceededException explicitly. Defaulting to unlimited steps in production is asking for a runaway API bill.

$response = OrderAssistant::make()
    ->maxSteps(10)
    ->prompt("What's the status of order 99?");
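Handling the overrun explicitly might look like this — the exception class name is taken from the note above, so treat it as the SDK's assumed contract rather than verified API:

```php
use Laravel\Ai\Exceptions\MaxStepsExceededException;
use Illuminate\Support\Facades\Log;

try {
    $response = OrderAssistant::make()
        ->maxSteps(10)
        ->prompt("What's the status of order 99?");
} catch (MaxStepsExceededException $e) {
    // The agent looped without converging — surface it, don't blindly retry
    // with the same prompt, which would just loop (and bill) again.
    Log::warning('OrderAssistant exceeded its step budget');
    $response = 'Sorry, I could not complete that request.';
}
```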

If you’re building more sophisticated multi-step agentic workflows and need schema validation on tool outputs — which you almost certainly will once you’re past demos — the Hardening Laravel Agentic Workflows: Schema Validation Against LLM Hallucinations guide covers exactly that.

Queue Integration and config/ai.php as Operational Infrastructure

One under-discussed aspect of the AI SDK is how it integrates with Laravel’s Queue system. Transcription now supports timeouts, giving you better control in production workloads and preventing long-running requests from tying up workers.

use Laravel\Ai\Transcription; // namespaces assumed to mirror Image/Audio above
use Laravel\Ai\Enums\Lab;
$transcript = Transcription::fromPath('./podcast.mp3')
    ->timeout(240)
    ->generate(Lab::ElevenLabs);

That timeout() call maps directly to the queue worker’s job timeout. If you’re already familiar with $timeout on job classes, this is the same mechanism — now surfaced at the SDK level so you don’t have to know to set it yourself.
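For comparison, the hand-rolled equivalent on a job class — using the TranscribeAudioJob name from the routing example below only as an illustration:

```php
use Illuminate\Contracts\Queue\ShouldQueue;

class TranscribeAudioJob implements ShouldQueue
{
    // Same mechanism the SDK's timeout() surfaces: the worker kills the
    // job if handle() runs longer than this many seconds.
    public int $timeout = 240;
}
```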

Pair this with Laravel 13’s new Queue::route() method for clean queue topology:

// app/Providers/AppServiceProvider.php
use Illuminate\Support\Facades\Queue;
Queue::route([
    IngestDocumentJob::class   => 'embeddings',
    GenerateImageJob::class    => 'ai-images',
    TranscribeAudioJob::class  => 'transcription',
]);

AI jobs should never share a queue with your standard application jobs. Embedding generation and image synthesis calls have entirely different latency profiles and retry semantics than sending a welcome email. The new Queue::route() method lets you define which queue and connection each job class uses from a single location in a service provider. Previously, teams either set queue properties on each job class or repeated the configuration at every dispatch site.

Centralise that in a Service Provider and you’ve got one place to retune queue topology when your AI workload changes.

Error Handling: What the SDK Does, and What It Doesn’t

The AI SDK handles retry logic and error normalisation internally, but that does not mean you write no error handling. It means you handle fewer low-level concerns. The contracts you still own:

use App\Ai\Agents\SalesCoach;
use App\Jobs\InvokeSalesCoachJob;
use Illuminate\Support\Facades\Log;
use Laravel\Ai\Exceptions\AiProviderException;
use Laravel\Ai\Exceptions\RateLimitException;
use Laravel\Ai\Exceptions\ContentFilterException;
try {
    $response = SalesCoach::make()->prompt($userInput);
} catch (RateLimitException $e) {
    // The SDK exhausted its internal retry budget — back off and re-queue
    InvokeSalesCoachJob::dispatch($userInput)->delay(now()->addSeconds(60));
} catch (ContentFilterException $e) {
    // The provider flagged the input — log it, don't retry
    Log::warning('Content filter triggered', ['input_hash' => hash('sha256', $userInput)]);
    return response()->json(['error' => 'Input could not be processed.'], 422);
} catch (AiProviderException $e) {
    // Unknown provider error — log full context, fail gracefully
    Log::error('AI provider failure', ['message' => $e->getMessage()]);
    return response()->json(['error' => 'Service temporarily unavailable.'], 503);
}

The SDK’s internal retry logic handles transient 5xx errors and network timeouts. RateLimitException is thrown when the SDK’s retry budget is exhausted — at that point, you need application-level backpressure, not another retry. Handle these as distinct failure modes. They are.

[Word to the Wise] Logging $e->getMessage() on a provider exception often includes the raw prompt in the message string depending on the provider. Sanitise before logging in any context where the prompt may contain user PII. This is not a theoretical concern — it’s the kind of thing that shows up in a GDPR audit.
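One hedged way to keep provider errors debuggable without persisting the prompt — the log field names are illustrative:

```php
use Illuminate\Support\Facades\Log;

// Log enough to correlate and count failures without storing the raw
// message, which may embed the user's prompt verbatim.
Log::error('AI provider failure', [
    'exception'   => get_class($e),
    'message_sha' => hash('sha256', $e->getMessage()),
    'prompt_len'  => mb_strlen($userInput),
]);
```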

The Bigger Picture: Laravel’s AI Roadmap Signal

The Laravel AI SDK going stable on the same day as Laravel 13 is a deliberate signal: the framework’s roadmap is now AI-first. The docs gain a dedicated AI section with entries for the AI SDK, MCP integration, and Laravel Boost — Taylor’s AI-assisted development tooling — and the installation guide adds a “Laravel and AI” section that points existing apps to Boost for AI-assisted workflows.

Read that as a directional commitment. The primitives shipped in Laravel 13 — provider-agnostic text generation, first-class embeddings, native vector queries, tool-calling agents — are the foundation. What gets built on top of them in 13.x point releases and Laravel 14 will assume they exist. If you’re planning AI features over the next 12 months and you’re not on Laravel 13, you’re doing that planning on a weaker foundation than the framework now offers.

The upgrade path is genuinely low-friction for most apps. The official release notes emphasise minimal breaking changes, and the official upgrade guide estimates about 10 minutes for many applications. The real risk is in custom cache behaviour, request forgery edge cases, and hand-rolled framework integrations.

If you’re running a custom AI integration today — your own OpenAI service class, your own retry decorator, your own token-counting middleware — the migration question is not “should I switch to the AI SDK?” It’s “how quickly can I make the switch before the ecosystem diverges enough that porting gets expensive?” For token tracking and rate limiting specifically, see Laravel AI Middleware: Token Tracking & Rate Limiting — that pattern ports cleanly onto the SDK’s event hooks.

Upgrade Checklist: AI-Specific Steps for Laravel 13

Before you run composer require laravel/framework:^13.0, address the following:

1. PHP version. Laravel 13 requires PHP 8.3 as a minimum. PHP 8.5, released November 2025, is also supported by Laravel 13, bringing further JIT improvements and native URI handling. Check your server runtime before anything else.

2. Remove redundant provider SDKs. If you’re injecting openai-php/client or anthropic/anthropic-sdk-php directly via the Service Container, evaluate whether the AI SDK replaces that dependency entirely. In most cases, it does. Keeping both creates competing abstraction layers.

3. Audit your queue configuration. With Queue::route() now available, centralise your AI job routing in AppServiceProvider. If you’re currently setting public string $queue = 'ai' on individual job classes, that works — but Queue::route() is cleaner and easier to update without touching job classes.

4. Generate and store your config/ai.php. The SDK expects this file. Publish it with:

php artisan vendor:publish --tag=ai-config

Then pin your model names explicitly rather than relying on package defaults. Model defaults change between SDK minor releases. You want to control when your application picks up a new default model — not discover that it happened because a patch updated the SDK.

5. Add pgvector to your Postgres instance if you plan to use whereVectorSimilarTo. The extension is available in RDS, Supabase, and Neon without additional configuration. On self-managed Postgres, install it with:

CREATE EXTENSION IF NOT EXISTS vector;

6. Test your request forgery protection. Laravel 13 formalises PreventRequestForgery with origin-aware verification on top of token-based CSRF. If you have custom CSRF handling or API routes with unusual origin configurations, test them explicitly before deploying.

What’s Not Yet There

Let’s be direct about the gaps.

Streaming responses — where the model outputs tokens progressively rather than in a single payload — are not yet a first-class concern in the AI SDK’s stable release. For chat interfaces that need token streaming, you’ll still be reaching for provider-specific solutions or custom SSE handling. Watch the 13.x changelog; this is an obvious next primitive.

Multi-modal input — sending images or audio to the model rather than from it — is also not yet documented in the stable SDK surface area. It’s likely coming in a 13.x release, but don’t plan around it until it ships.

The vector search integration is Postgres-only for now. If you’re on MySQL or MariaDB, whereVectorSimilarTo is not available. For those stacks, external vector stores (Pinecone, Qdrant, Weaviate) remain the path, and you’ll need your own integration layer.

Final Thoughts

Laravel 13 is not a rewrite. It does not break your application. What it does is establish a new baseline for what “standard Laravel AI work” looks like — and that baseline is considerably higher than it was under Laravel 12. The teams that adopt this now and build their AI features against SDK contracts rather than raw provider clients will have significantly less migration debt when Laravel 14 arrives and extends these primitives further.

Get on PHP 8.3, publish config/ai.php, and start moving your AI layer onto the SDK. The upgrade cost is low. The cost of not doing it compounds.

For the official release notes and full SDK documentation, see the Laravel 13 Release Notes and the Laravel AI SDK documentation directly.
