Cache::funnel() in Laravel: Concurrency Limiting Without Redis

Hafiz

Posted on • Originally published at hafiz.dev
Picture a queue of jobs that each call an external AI provider. Your API key allows 5 concurrent requests. Any more and you start getting 429s. Any fewer and you're leaving throughput on the table.

The classic Laravel solution was Redis::funnel(), which meant you needed Redis. Not great when your project runs on the file or database cache driver. And genuinely painful in tests, where you either had to mock the Redis facade (fragile), spin up a real Redis instance in CI (annoying), or skip testing the concurrency logic altogether (common, but not ideal).

Laravel 12.53.0 shipped Cache::funnel(). Same concept, same API shape, but backed by any lock-capable cache driver. File, database, Redis, or the array driver in tests. Your concurrency logic stops being coupled to your infrastructure choice.

Here's what it does, how to use it, and when it actually matters.

A Quick Recap of What Redis::funnel() Was Doing

To make sense of the new API, it's worth understanding the old one.

Redis::funnel() used Redis Lua scripts to manage a pool of execution slots atomically. You'd define a key for the resource you wanted to protect, set a limit on concurrent executions, and the Lua script handled the semaphore logic on the Redis side.

use Illuminate\Support\Facades\Redis;

Redis::funnel('ai-api-calls')
    ->limit(5)
    ->releaseAfter(120)
    ->block(30)
    ->then(function () {
        // at most 5 of these running at once
    }, function () {
        $this->release(60);
    });

It worked well. Still works well. But you couldn't use it without Redis, and you couldn't test it without Redis either. That's the friction Cache::funnel() resolves.

The New API

Cache::funnel() lives on the Cache facade and uses the cache layer's lock primitives rather than Redis-specific scripts. Any driver implementing LockProvider works: database, file, Redis, and the array driver you're probably already using in tests.

use Illuminate\Support\Facades\Cache;

Cache::funnel('ai-api-calls')
    ->limit(5)
    ->releaseAfter(120)
    ->block(30)
    ->then(function () {
        // slot acquired
    }, function () {
        // couldn't get a slot within block time
    });

If you've used Redis::funnel() before, this reads identically. That's intentional. Let's break down each method so nothing is ambiguous.

limit()

Sets the maximum number of concurrent executions that can hold a slot. ->limit(5) means at most 5 closures are running simultaneously. The 6th caller blocks or fails, depending on your block() setting.

releaseAfter()

The safety TTL, in seconds. If a process acquires a slot and then crashes or gets killed before finishing, the slot auto-expires after this many seconds. It's not a timeout for how long your work should take. Think of it as a dead-man's switch so slots don't stay locked forever after a crash.

Set this realistically. If your job can legitimately take four minutes, a releaseAfter(60) means crashed processes release slots after one minute and new executions grab them before previous ones are done. When in doubt, overestimate.

block()

How long a caller should wait for a slot to become available before giving up. ->block(30) means wait up to 30 seconds. ->block(0) means don't wait at all: try once, and if no slot is available right now, immediately run the failure callback.

then()

Two callables. The first runs when a slot is acquired. The second runs when the block time expires without getting one. The slot releases automatically when the first callable returns, so the next waiting caller can grab it.

If you'd rather handle failure via exceptions than a callback, leave the second argument out:

use Illuminate\Cache\Limiters\LimiterTimeoutException;
use Illuminate\Support\Facades\Log;

try {
    Cache::funnel('ai-api-calls')
        ->limit(5)
        ->releaseAfter(120)
        ->block(30)
        ->then(function () {
            // runs with a slot
        });
} catch (LimiterTimeoutException $e) {
    // block time expired, no slot acquired
    Log::warning('Could not acquire concurrency slot', ['key' => 'ai-api-calls']);
}

And if you need a specific store rather than the default:

Cache::store('database')->funnel('ai-api-calls')
    ->limit(5)
    ->releaseAfter(120)
    ->block(30)
    ->then(function () {
        // locked to the database store explicitly
    });

One thing to flag upfront: Memcached doesn't implement LockProvider. Calling Cache::funnel() on a Memcached-backed store throws BadMethodCallException. If Memcached is your default driver, call Cache::store('database')->funnel() or Cache::store('redis')->funnel() explicitly instead.
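If you'd rather not hard-code that decision, you can check the underlying store for the LockProvider contract before funneling. This is a defensive sketch, not a prescribed pattern; the 'database' fallback is an assumption, so adjust it to whatever stores your config/cache.php actually defines:

```php
use Illuminate\Contracts\Cache\LockProvider;
use Illuminate\Support\Facades\Cache;

// Fall back to an assumed lock-capable store when the default
// driver (e.g. Memcached) doesn't support locks.
$store = Cache::getStore() instanceof LockProvider
    ? Cache::store()
    : Cache::store('database');

$store->funnel('ai-api-calls')
    ->limit(5)
    ->releaseAfter(120)
    ->block(30)
    ->then(fn () => /* protected work */ null);
```

In practice you'd usually settle this once per application rather than per call site, but the check is handy in packages that can't assume a particular cache driver.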

Here's how the full slot acquisition flow works when you call Cache::funnel():

View the interactive diagram on hafiz.dev

The key thing to notice: the slot releases automatically when the success closure returns. You don't call anything manually. And if a process crashes before returning, releaseAfter handles the cleanup on a timer.

Use Case 1: Throttling External API Calls in Queue Jobs

This is the most common reason to reach for concurrency limiting. You have a pool of jobs calling an external service and you need to stay within its concurrency cap.

If you're not on Laravel 12.53.0 yet, check the upgrade guide first, since Cache::funnel() isn't available in earlier versions.

Here's a queue job that limits itself to 5 concurrent calls against an external AI API:

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\Http;

class ProcessAiRequest implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public int $tries = 10;
    public int $backoff = 60;

    public function __construct(private readonly string $prompt) {}

    public function handle(): void
    {
        Cache::funnel('ai-api-concurrency')
            ->limit(5)
            ->releaseAfter(120)
            ->block(30)
            ->then(
                function () {
                    $response = Http::withToken(config('services.ai.key'))
                        ->timeout(90)
                        ->post('https://api.example.com/v1/generate', [
                            'prompt' => $this->prompt,
                        ]);

                    // process $response
                },
                function () {
                    // no slot within 30 seconds, re-queue
                    $this->release(60);
                }
            );
    }
}

The slot releases the moment the closure returns, so the next waiting job can grab it. If the job crashes mid-execution, the slot releases after 120 seconds. Workers just keep pulling from the queue and the funnel manages the cap invisibly.

The funnel key 'ai-api-concurrency' is global here, meaning the limit applies across all workers and all processes combined. That's usually exactly what you want when limiting against a shared external resource with a fixed API key.

If you're structuring more complex pipelines with different agent types, the multi-agent queue patterns post covers how to approach those. Per-agent-type limits fit naturally into that kind of setup, where you'd just make the key more specific: "ai-concurrency:{$agentType}".
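A per-agent-type funnel might look like the following sketch. The key format, limits, and the $agentType values are illustrative assumptions, not a prescribed convention:

```php
use Illuminate\Support\Facades\Cache;

// Each agent type gets its own independent slot pool. '$agentType' is a
// hypothetical value such as 'summarizer' or 'classifier'.
Cache::funnel("ai-concurrency:{$agentType}")
    ->limit(3)
    ->releaseAfter(120)
    ->block(0)
    ->then(
        fn () => /* call the provider for this agent type */ null,
        fn () => /* re-queue or back off */ null
    );
```

Because the limit lives in the key, adding a new agent type needs no extra configuration; the first job that uses the key creates the pool.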

Use Case 2: Per-User Report Generation

Classic SaaS problem. Users can kick off report generation, and you want at most 2 running per user at once. More than that and the server starts to feel it. If no slot is free, that's fine: just tell the user their report is queued.

public function generateReport(User $user, Report $report): void
{
    Cache::funnel("report-generation:{$user->id}")
        ->limit(2)
        ->releaseAfter(300)
        ->block(0)
        ->then(
            function () use ($report) {
                $report->generate();
            },
            function () use ($report) {
                $report->markAsQueued();
                UserNotification::send($report->user, 'Your report is queued and will start shortly.');
            }
        );
}

->block(0) is intentional. You don't want to hold up the current request waiting for a slot. If both slots are taken, fall into the failure callback immediately and handle it gracefully.

The funnel key includes the user ID, so each user gets their own independent slot pool. One user hammering the generate button a dozen times doesn't affect anyone else's quota. That's the real value: fine-grained per-entity control with very little code.

This kind of control complements the queue topology patterns covered in the queue jobs post. You can configure your workers to run a large number of concurrent jobs at the queue level, then use Cache::funnel() inside jobs to apply more targeted limits based on the resource being accessed.

Use Case 3: Testing Without Infrastructure

This is the improvement I'm most excited about, and the one the PHP community noticed quickest when the PR landed.

Before Cache::funnel(), testing concurrency logic meant wrestling with infrastructure. Your choices with Redis::funnel() were: mock the Redis facade (you're testing your mock, not your logic), spin up Redis in CI (more setup, more cost), or skip testing it at all (the most honest option, and the most common).

With the array driver, all of that goes away. Your concurrency logic tests run in pure in-memory isolation with zero infrastructure:

use Illuminate\Support\Facades\Cache;

it('blocks executions beyond the concurrency limit', function () {
    $acquired = 0;
    $blocked = 0;

    for ($i = 0; $i < 5; $i++) {
        Cache::store('array')->funnel('test-resource')
            ->limit(2)
            ->releaseAfter(60)
            ->block(0)
            ->then(
                function () use (&$acquired) { $acquired++; },
                function () use (&$blocked) { $blocked++; }
            );
    }

    expect($acquired)->toBe(2);
    expect($blocked)->toBe(3);
});

No Redis connection, no Docker service, no special test helpers. The test proves your concurrency logic works correctly and runs in milliseconds.

This testing story alone is a compelling reason to migrate away from Redis::funnel() in any application where you actually want to test this kind of behaviour.

Using It in Job Middleware

So far all the examples put the funnel logic inside handle(). That works, but there's a cleaner pattern for queue jobs: defining it in the job's middleware() method.

use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\Http;

class ProcessAiRequest implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function __construct(private readonly string $prompt) {}

    public function middleware(): array
    {
        return [
            function ($job, $next) {
                Cache::funnel('ai-api-concurrency')
                    ->limit(5)
                    ->releaseAfter(120)
                    ->block(0)
                    ->then(
                        fn () => $next($job),
                        fn () => $job->release(60)
                    );
            },
        ];
    }

    public function handle(): void
    {
        // handle() only runs if the funnel granted a slot
        $response = Http::withToken(config('services.ai.key'))
            ->post('https://api.example.com/v1/generate', [
                'prompt' => $this->prompt,
            ]);
    }
}

The advantage here is separation of concerns. The concurrency limit is declared alongside the job's other infrastructure concerns (retries, backoff, timeout) rather than buried inside the business logic. The handle() method stays focused on what the job actually does.

This pattern is especially useful when multiple job types need the same concurrency limit. You can extract the middleware closure into a shared class and reference it from each job rather than duplicating the funnel configuration everywhere.
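Extracted into a shared class, that could look like the sketch below. The class name, constructor parameters, and default values are assumptions; Laravel resolves job middleware through a handle($job, $next) method:

```php
use Illuminate\Support\Facades\Cache;

// Hypothetical reusable job middleware: every job that returns this from
// middleware() with the same key shares one funnel-based concurrency cap.
class FunnelConcurrency
{
    public function __construct(
        private string $key,
        private int $limit = 5,
        private int $releaseAfter = 120,
    ) {}

    public function handle(object $job, callable $next): void
    {
        Cache::funnel($this->key)
            ->limit($this->limit)
            ->releaseAfter($this->releaseAfter)
            ->block(0)
            ->then(
                fn () => $next($job),          // slot acquired: run the job
                fn () => $job->release(60)     // no slot: retry in a minute
            );
    }
}
```

Each job then declares its limit in one line, e.g. return [new FunnelConcurrency('ai-api-concurrency')]; from middleware(), and the funnel configuration lives in exactly one place.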

Cache::funnel() vs Cache::withoutOverlapping()

Both live in the concurrency limiting section of the Laravel docs and it's easy to confuse them. They solve different problems.

Cache::withoutOverlapping() is for single-instance control: only one execution at a time, globally. Use this for scheduled commands where two copies running simultaneously would be a bug.

Cache::withoutOverlapping('nightly-data-sync', function () {
    DataSync::runAll();
});

Cache::funnel() is for controlled parallelism: up to N concurrent executions, but not unlimited. Use this when some concurrency is fine, you just need a ceiling on it.

The classic CS framing: withoutOverlapping is a mutex (one at a time), funnel is a semaphore (N at a time). Reach for withoutOverlapping when concurrency is always wrong for the operation. Reach for funnel when it's acceptable but needs a cap.

Should You Migrate Away From Redis::funnel()?

If you're already using Redis::funnel() and it works, nothing is broken and nothing is deprecated. There's no urgent reason to change.

The migration itself is a straight swap at each call site:

// Before
Redis::funnel('ai-api-calls')
    ->limit(5)
    ->releaseAfter(120)
    ->block(30)
    ->then(fn () => $this->callApi(), fn () => $this->release(60));

// After
Cache::funnel('ai-api-calls')
    ->limit(5)
    ->releaseAfter(120)
    ->block(30)
    ->then(fn () => $this->callApi(), fn () => $this->release(60));

Just the facade changes. Everything else is identical. If you're on Redis in production, the behaviour is equivalent since Cache::funnel() delegates to the same lock primitives on the Redis driver.

That said, there are three concrete reasons to prefer Cache::funnel() going forward.

First, no hard Redis dependency. If your cache driver ever changes, your concurrency logic moves with it. No code changes, no surprise errors in a staging environment that uses a different driver than production.

Second, it's testable without infrastructure. Array driver in tests, real driver in production. You write tests that actually exercise the concurrency logic rather than mocking around it.

Third, it's one fewer abstraction to maintain. Redis::funnel() lives in the Redis documentation. Cache::funnel() lives in the Cache documentation. One facade, one section of the docs, one mental model for your team.

The underlying mechanism differs: Redis::funnel() uses Lua scripts, Cache::funnel() uses the lock primitives the cache driver exposes. But for application-level concurrency control, the behaviour is equivalent and the difference is invisible to your application code.

Things to Watch Out For

Memcached is unsupported. If your default cache driver is Memcached, Cache::funnel() throws BadMethodCallException. Either switch to a supported driver or call a specific store like Cache::store('database')->funnel().

Set releaseAfter conservatively. If the work could legitimately take five minutes, a 60-second TTL means crashed processes release slots before work finishes and new ones grab them. You end up with more concurrent executions than your limit() intended. Overestimate rather than underestimate.

block(0) needs a failure handler you actually wrote. With zero wait time, any call that doesn't immediately get a slot hits the failure callback. Returning nothing silently from that callback is almost always wrong. Re-queue the job, notify the user, log a warning, or do something intentional.

This is application-level, not queue-level. Cache::funnel() doesn't configure Horizon or tell your queue supervisor to run fewer concurrent workers. If you need to control queue-level concurrency, that's a separate configuration. These are complementary tools.

The key is global across all workers. The funnel key 'api-calls' applies across every worker process and every server. That's what you want when the limit comes from a shared external resource. If you need per-server limits, scope the key: "api-calls:{$serverId}".
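A per-server scope is just a more specific key. In this sketch, gethostname() as the server identifier is an assumption; use whatever uniquely identifies a node in your deployment:

```php
use Illuminate\Support\Facades\Cache;

// Scope the funnel to this machine: even on a shared cache store,
// different keys mean each server gets its own slot pool.
$serverId = gethostname();

Cache::funnel("api-calls:{$serverId}")
    ->limit(5)
    ->releaseAfter(120)
    ->block(30)
    ->then(fn () => /* work capped per server */ null);
```

Note this only makes sense when the limited resource is genuinely per-server (local disk, local processes). For a shared external API, keep the key global.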

FAQ

Which Laravel version introduced Cache::funnel()?

It was merged on February 21, 2026, and shipped in Laravel 12.53.0. It's also in Laravel 13 from the initial release.

Which cache drivers support it?

Redis, database, file, and array drivers all implement LockProvider and work with Cache::funnel(). Memcached does not, and calling funnel() on it throws BadMethodCallException.

What happens if a process crashes mid-execution?

The slot auto-releases after the releaseAfter timeout. Set it long enough to cover legitimate execution time, otherwise crashed processes release slots early and new executions start before previous ones are done.

Can I skip the failure callback?

Yes. Omit the second argument to ->then() and LimiterTimeoutException is thrown when block time expires without getting a slot. Useful if you'd rather handle failure in a catch block than a closure.

Is this the same as rate limiting middleware?

No. The throttle middleware controls request frequency: how many requests per minute from a given client. Cache::funnel() controls concurrent executions: how many can run at the same time. Different problems, different tools.

Can I use this in a scheduled command?

Yes, but Cache::withoutOverlapping() is usually the better fit for commands where you want zero overlap. Use Cache::funnel() when some parallelism is fine but you need a specific cap.

Wrapping Up

Cache::funnel() is one of those changes that quietly fixes a real friction point. Concurrency logic in Laravel shouldn't require Redis. Tests for that logic shouldn't require infrastructure.

If you've been relying on Redis::funnel(), the migration is straightforward: same method chain, different facade call. If you've been avoiding concurrency limiting because the Redis dependency was awkward or the testing story was painful, those excuses are gone now.

The pattern is genuinely useful once you start seeing where it applies. Per-user limits, per-resource limits, per-API-key throttling. Most queue-heavy features have at least one spot where Cache::funnel() simplifies the code and removes a dependency you didn't need.

Questions or edge cases I didn't cover? Get in touch and we can dig into it.
