DEV Community

Jha Faya (Botender)

Why I Built a Business Content Layer on Top of Laravel AI SDK

I tried laravel/ai when it came out. The SDK is well-designed — clean provider abstraction, good DX. But the moment I tried to use it for real business content generation, I ran into the same problem every time. There's no business layer.

You get the LLM call. Everything else — presets, context injection, output structure, tone control, anti-hallucination enforcement — you build yourself. And you rebuild it on every project.

After the third time setting up the same prompt scaffolding for a payment reminder feature, I stopped and packaged it.
This is what I built, why I made the decisions I made, and how it works.


The problem with raw LLM calls in business apps

Here's what generating a payment reminder looks like with laravel/ai directly:

$ai = app(\Laravel\Ai\Contracts\Ai::class);

$response = $ai->text(
    "You are a professional business assistant. Generate a payment reminder 
     email for a client. The invoice number is #1042. It is 30 days overdue. 
     The amount is 1500 EUR. The client name is Jean Martin. 
     Our company is Acme Corp.

     Return a JSON object with: subject, message, call_to_action.
     Do not invent any information not provided above.
     Use a firm but professional tone.
     Language: French."
);

// Now parse the response...
// Hope it returned valid JSON...
// Hope it didn't invent an amount...
// Hope the tone is right...

This works. Until it doesn't.

The problems stack up quickly in production:

  • You hardcode context into prompts — no reusable structure
  • Output parsing is fragile — the model occasionally returns prose instead of JSON
  • Anti-hallucination is a best-effort instruction, not enforced
  • Tone, language, audience — all manual every time
  • No logging — you have no idea what was generated, when, or for which tenant

And you rewrite all of this. Every project. Every feature.
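The fragile-parsing bullet alone usually costs you a helper like this on every project. A hedged sketch of the defensive extraction you end up hand-writing (the function name is mine, not from any SDK):

```php
<?php

/**
 * Best-effort extraction of a JSON object from an LLM response that
 * may wrap it in prose or markdown fences. Returns null on failure.
 */
function extractJsonObject(string $raw): ?array
{
    // Happy path: the whole response is valid JSON.
    $decoded = json_decode($raw, true);
    if (is_array($decoded)) {
        return $decoded;
    }

    // Fallback: grab the outermost {...} span and try again.
    $start = strpos($raw, '{');
    $end   = strrpos($raw, '}');
    if ($start === false || $end === false || $end <= $start) {
        return null;
    }

    $decoded = json_decode(substr($raw, $start, $end - $start + 1), true);

    return is_array($decoded) ? $decoded : null;
}
```

And that's just parsing — tone, context, and hallucination guards each need their own copy-pasted scaffolding on top.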


What I wanted instead

A layer that handles the business concerns, so I just describe what I want:

$response = BusinessAssistant::generate(new AssistantRequestData(
    task: AssistantTask::Email,
    preset: 'payment_reminder',
    goal: 'Invoice #1042, 30 days overdue, second reminder',
    language: 'fr',
    tone: Tone::Direct,
    context: [
        'company'  => true,
        'customer' => ['id' => 42],
        'billing'  => ['invoice_id' => 99],
    ],
));

echo $response->subject;        // "Rappel : Facture #1042 en attente de règlement"
echo $response->message;        // Full professional email body
echo $response->call_to_action; // "Procéder au règlement"

No prompt writing. No output parsing. No hallucination guard setup.


The architecture

The engine is a pipeline of contracts. Every step is an interface — you can
swap any layer without touching the rest.

AssistantRequestData
        │
        ▼
RequestValidator        — validates input fields
        │
        ▼
PresetRepository        — resolves the matching preset
        │
        ▼
ContextResolver         — calls your ContextProviders
        │
        ▼
ContextSanitizer        — strips sensitive keys, limits depth
        │
        ▼
PromptBuilder           — assembles system + user prompts
        │
        ▼
TextGenerator           — calls laravel/ai SDK
        │
        ▼
OutputNormalizer        — parses structured JSON response
        │
        ▼
GenerationLogger        — records to assistant_generations (best-effort)
        │
        ▼
AssistantResponseData

The key design decision: the engine never knows about your data model.
It receives arrays. You decide what goes in.
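The shape of that chain can be sketched in a few lines — each stage is a contract that takes the accumulating payload as an array and hands it on. The stage and class names here mirror the diagram; the bodies are illustrative stand-ins, not the package's internals:

```php
<?php

interface PipelineStage
{
    /** Transform the payload and pass it on to the next stage. */
    public function handle(array $payload): array;
}

final class Pipeline
{
    /** @param PipelineStage[] $stages ordered as in the diagram above */
    public function __construct(private array $stages) {}

    public function run(array $payload): array
    {
        foreach ($this->stages as $stage) {
            $payload = $stage->handle($payload);
        }

        return $payload;
    }
}
```

Swapping a layer means binding a different implementation of its interface; the rest of the chain never notices.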


ContextProviders — the bridge between your DB and the engine

This is the part I'm most proud of. Instead of hardcoding data into prompts,
you write simple provider classes:

final class CustomerContextProvider implements ContextProvider
{
    public function key(): string
    {
        return 'customer';
    }

    public function provide(AssistantRequestData $request, array $input = []): array
    {
        $customer = Customer::find($input['id'] ?? null);

        if (! $customer) {
            return []; // no customer, nothing to inject
        }

        return [
            'name'  => $customer->full_name,
            'email' => $customer->email,
            'plan'  => $customer->plan,
            'since' => $customer->created_at->format('Y'),
        ];
    }
}

Register it once in config — every generation call auto-injects the right data.
The engine handles sanitization, stripping sensitive keys before anything
reaches the prompt.
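A sanitizer of that shape might look like this — a minimal sketch, assuming a configurable key blocklist and depth cap (the package's actual rules may differ):

```php
<?php

final class ArrayContextSanitizer
{
    public function __construct(
        private array $blockedKeys = ['password', 'token', 'secret', 'api_key'],
        private int $maxDepth = 3,
    ) {}

    public function sanitize(array $context, int $depth = 0): array
    {
        if ($depth >= $this->maxDepth) {
            return []; // too deep — drop rather than leak
        }

        $clean = [];
        foreach ($context as $key => $value) {
            if (is_string($key) && in_array(strtolower($key), $this->blockedKeys, true)) {
                continue; // sensitive key never reaches the prompt
            }
            $clean[$key] = is_array($value)
                ? $this->sanitize($value, $depth + 1)
                : $value;
        }

        return $clean;
    }
}
```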


Anti-hallucination — enforced structurally

The biggest problem with AI in business apps: the model invents things.

A payment reminder that invents an amount. An appointment confirmation
that invents a time. I learned this the hard way on a billing feature —
the model confidently generated "your invoice of €2,340" when the actual
amount was €1,500.

The solution is structural: if the data isn't in the context, it can't
reach the prompt. Three layers enforce this: the ContextSanitizer, the
preset's system prompt constraints, and OutputNormalizer validation.
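The last of those layers can be as blunt as refusing any response that drifts from the preset's schema. A minimal sketch, assuming each preset declares its required output keys (class and method names here are illustrative, not the package's API):

```php
<?php

final class StructuredOutputValidator
{
    /**
     * @param string[] $requiredKeys keys the preset's output schema demands
     * @throws InvalidArgumentException when the model drifted from the schema
     */
    public function validate(array $output, array $requiredKeys): array
    {
        $missing = array_diff($requiredKeys, array_keys($output));

        if ($missing !== []) {
            throw new InvalidArgumentException(
                'Model output missing keys: ' . implode(', ', $missing)
            );
        }

        return $output;
    }
}
```

Failing loudly here beats shipping a payment reminder with a half-formed body.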


14 presets — what's included

email                    — Professional business email
reply                    — Contextual client reply
payment_reminder         — Overdue invoice reminder
appointment_confirmation — Appointment details + confirmation
support_reply            — Empathetic support response
follow_up                — Post-meeting follow-up
reminder                 — Generic reminder
announcement             — Company/product announcement
promotion                — Commercial promotional email
social_post              — Publishable social media post
internal_note            — Internal team memo
summary                  — Document or interaction summary
customer_summary         — Customer briefing for agents
rewrite                  — Text reformulation

Real examples

SaaS billing — second payment reminder

BusinessAssistant::generate(new AssistantRequestData(
    task: AssistantTask::Email,
    preset: 'payment_reminder',
    goal: 'Second reminder. Invoice INV-2024-0112, 30 days overdue.',
    tone: Tone::Direct,
    language: 'fr',
    context: [
        'company'  => true,
        'customer' => ['id' => 14],
        'billing'  => ['invoice_id' => 5501],
    ],
));

Driving school — lesson confirmation

BusinessAssistant::generate(new AssistantRequestData(
    task: AssistantTask::Email,
    preset: 'appointment_confirmation',
    goal: 'Confirm driving lesson. Remind student to bring permit.',
    tone: Tone::Friendly,
    language: 'fr',
    context: [
        'company'     => true,
        'customer'    => ['id' => 88],
        'appointment' => ['lesson_id' => 334],
    ],
));

CRM support — billing dispute

BusinessAssistant::generate(new AssistantRequestData(
    task: AssistantTask::Reply,
    preset: 'support_reply',
    goal: 'Client was billed twice. Acknowledge, apologize, confirm investigation.',
    tone: Tone::Reassuring,
    language: 'en',
    context: [
        'company'   => true,
        'customer'  => ['id' => 42],
        'documents' => ['ticket_id' => 1091],
    ],
));

Provider flexibility

Switch providers with a single .env change — no code modification:

# Anthropic Claude
BUSINESS_ASSISTANT_PROVIDER=anthropic
BUSINESS_ASSISTANT_MODEL=claude-haiku-4-5-20251001

# OpenAI
BUSINESS_ASSISTANT_PROVIDER=openai
BUSINESS_ASSISTANT_MODEL=gpt-4o-mini

# Local Ollama
BUSINESS_ASSISTANT_PROVIDER=ollama
BUSINESS_ASSISTANT_MODEL=qwen2.5:3b

Multi-tenant support

Built inside a multi-tenant ERP, so tenant scoping is first-class:

$request = new AssistantRequestData(
    task: AssistantTask::Email,
    goal: '...',
    userIdentifier:   (string) auth()->id(),
    tenantIdentifier: (string) $tenant->id,
);

Every generation is logged to assistant_generations with tenant and user
identifiers — ready for quota tracking and per-tenant reporting.


Debug without calling the LLM

Preview the full prompt without making an API call:

$prompt = BusinessAssistant::preview($request);

echo $prompt->systemPrompt;  // full system instructions
echo $prompt->userPrompt;    // user prompt with injected context

Installation

composer require fsdev/laravel-business-assistant
php artisan vendor:publish --tag=business-assistant-config
php artisan migrate
php artisan business-assistant:doctor
php artisan business-assistant:demo

Honest about what this is

Commercial package, not MIT. Lifetime release — Laravel 12, laravel/ai ~0.2.6,
buy once, no updates guaranteed. Not a replacement for laravel/ai or Prism —
sits on top and handles the business layer they deliberately don't.

payhip.com/b/TkFob — €49 solo / €149 agency

Happy to answer questions about the architecture or the preset system in the comments.


Built by Fsdev — tematahotoa.tini@gmail.com
